2.3. Clustering
Clustering of unlabeled data can be performed with the module sklearn.cluster.
Each clustering algorithm comes in two variants: a class, which implements the fit method to learn the clusters on train data, and a function, which, given train data, returns an array of integer labels corresponding to the different clusters. For the class, the labels over the training data can be found in the labels_ attribute.
Input data
One important thing to note is that the algorithms implemented in this module can take different kinds of matrices as input. All the methods accept standard data matrices of shape (n_samples, n_features). These can be obtained from the classes in the sklearn.feature_extraction module. For AffinityPropagation, SpectralClustering and DBSCAN one can also input similarity matrices of shape (n_samples, n_samples). These can be obtained from the functions in the sklearn.metrics.pairwise module.
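As a minimal sketch (synthetic data and illustrative parameter values), a precomputed matrix can be passed instead of a feature matrix. Note that DBSCAN with metric="precomputed" expects a distance matrix, while SpectralClustering with affinity="precomputed" expects a similarity (affinity) matrix:

>>> import numpy as np
>>> from sklearn.datasets import make_blobs
>>> from sklearn.metrics import pairwise_distances
>>> from sklearn.cluster import DBSCAN, SpectralClustering
>>> X, _ = make_blobs(n_samples=30, centers=3, random_state=0)
>>> D = pairwise_distances(X)                  # (n_samples, n_samples) distance matrix
>>> db = DBSCAN(eps=1.5, min_samples=3, metric="precomputed").fit(D)
>>> S = np.exp(-D ** 2 / 2.0)                  # turn distances into similarities
>>> sc = SpectralClustering(n_clusters=3, affinity="precomputed", random_state=0).fit(S)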
Non-flat geometry clustering is useful when the clusters have a specific shape, i.e. a non-flat manifold, and the standard Euclidean distance is not the right metric. This case arises in the two top rows of the figure above.
Gaussian mixture models, useful for clustering, are described in another chapter of the documentation dedicated to mixture models.
KMeans can be seen as a special case of Gaussian mixture model with equal covariance per component.
Transductive clustering methods (in contrast to inductive clustering methods) are not designed to be applied to new, unseen data.
2.3.2. K-means
The KMeans algorithm clusters data by trying to separate samples in n groups of equal variance, minimizing a criterion known as the inertia or within-cluster sum-of-squares (see below). This algorithm requires the number of clusters to be specified. It scales well to large numbers of samples and has been used across a large range of application areas in many different fields.

The k-means algorithm divides a set of N samples X into K disjoint clusters C, each described by the mean \mu_j of the samples in the cluster. The means are commonly called the cluster “centroids”; note that they are not, in general, points from X, although they live in the same space.

The K-means algorithm aims to choose centroids that minimise the inertia, or within-cluster sum-of-squares criterion:

\sum_{i=0}^{n} \min_{\mu_j \in C} (\|x_i - \mu_j\|^2)
Inertia can be recognized as a measure of how internally coherent clusters are. It suffers from various drawbacks:
Inertia makes the assumption that clusters are convex and isotropic, which is not always the case. It responds poorly to elongated
clusters, or manifolds with irregular shapes.
Inertia is not a normalized metric: we just know that lower values are better and zero is optimal. But in very high-dimensional spaces,
Euclidean distances tend to become inflated (this is an instance of the so-called “curse of dimensionality”). Running a dimensionality
reduction algorithm such as Principal component analysis (PCA) prior to k-means clustering can alleviate this problem and speed up
the computations.
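As a minimal sketch of this workaround (synthetic high-dimensional data and illustrative parameter values), PCA and KMeans can be chained in a pipeline:

>>> from sklearn.datasets import make_blobs
>>> from sklearn.decomposition import PCA
>>> from sklearn.cluster import KMeans
>>> from sklearn.pipeline import make_pipeline
>>> X, _ = make_blobs(n_samples=500, n_features=100, centers=4, random_state=0)
>>> model = make_pipeline(PCA(n_components=10), KMeans(n_clusters=4, random_state=0))
>>> labels = model.fit_predict(X)              # cluster assignments after reduction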
K-means is often referred to as Lloyd’s algorithm. In basic terms, the algorithm has three steps. The first step chooses the initial centroids, with the most basic method being to choose k samples from the dataset X. After initialization, K-means consists of looping between the two other steps. The first step assigns each sample to its nearest centroid. The second step creates new centroids by taking the mean value of all of the samples assigned to each previous centroid. The difference between the old and the new centroids is computed and the algorithm repeats these last two steps until this value is less than a threshold. In other words, it repeats until the centroids do not move significantly.
K-means is equivalent to the expectation-maximization algorithm with a small, all-equal, diagonal covariance matrix.
The algorithm can also be understood through the concept of Voronoi diagrams. First the Voronoi dia-
gram of the points is calculated using the current centroids. Each segment in the Voronoi diagram be-
comes a separate cluster. Secondly, the centroids are updated to the mean of each segment. The algo-
rithm then repeats this until a stopping criterion is fulfilled. Usually, the algorithm stops when the rela-
tive decrease in the objective function between iterations is less than the given tolerance value. This is
not the case in this implementation: iteration stops when centroids move less than the tolerance.
Given enough time, K-means will always converge, however this may be to a local minimum. This is highly dependent on the initialization
of the centroids. As a result, the computation is often done several times, with different initializations of the centroids. One method to
help address this issue is the k-means++ initialization scheme, which has been implemented in scikit-learn (use the init='k-means++'
parameter). This initializes the centroids to be (generally) distant from each other, leading to provably better results than random initialization, as shown in the reference.
K-means++ can also be called independently to select seeds for other clustering algorithms, see sklearn.cluster.kmeans_plusplus for
details and example usage.
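A minimal sketch of both uses (synthetic data and illustrative parameter values): fitting KMeans with the k-means++ initialization, and calling kmeans_plusplus directly to obtain seeds only:

>>> from sklearn.datasets import make_blobs
>>> from sklearn.cluster import KMeans, kmeans_plusplus
>>> X, _ = make_blobs(n_samples=200, centers=3, random_state=0)
>>> km = KMeans(n_clusters=3, init="k-means++", n_init=10, random_state=0).fit(X)
>>> centers, indices = kmeans_plusplus(X, n_clusters=3, random_state=0)  # seeds only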
The algorithm supports sample weights, which can be given by a parameter sample_weight. This makes it possible to assign more weight to some samples when computing cluster centers and values of inertia. For example, assigning a weight of 2 to a sample is equivalent to adding a duplicate of that sample to the dataset.
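A minimal sketch of this equivalence on a tiny toy dataset (weights and data values chosen only for illustration):

>>> import numpy as np
>>> from sklearn.cluster import KMeans
>>> X = np.array([[1.0, 1.0], [1.5, 2.0], [8.0, 8.0], [8.5, 9.0]])
>>> w = np.array([2.0, 1.0, 1.0, 1.0])         # weight 2 on the first sample
>>> km_w = KMeans(n_clusters=2, random_state=0).fit(X, sample_weight=w)
>>> X_dup = np.vstack([X, X[:1]])              # same effect as duplicating that sample
>>> km_d = KMeans(n_clusters=2, random_state=0).fit(X_dup)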
K-means can be used for vector quantization. This is achieved using the transform method of a trained model of KMeans.
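A minimal sketch of vector quantization with a fitted KMeans model (synthetic data): transform gives the distances to each centroid, and each sample is then replaced by its nearest centroid:

>>> import numpy as np
>>> from sklearn.datasets import make_blobs
>>> from sklearn.cluster import KMeans
>>> X, _ = make_blobs(n_samples=100, centers=4, random_state=0)
>>> km = KMeans(n_clusters=4, random_state=0).fit(X)
>>> dist = km.transform(X)                     # distances to each of the 4 centroids
>>> codes = np.argmin(dist, axis=1)            # same as km.predict(X)
>>> X_quantized = km.cluster_centers_[codes]   # each sample replaced by its centroid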
Examples:
Demonstration of k-means assumptions: Demonstrating when k-means performs intuitively and when it does not
A demo of K-Means clustering on the handwritten digits data: Clustering handwritten digits
References:
“k-means++: The advantages of careful seeding” Arthur, David, and Sergei Vassilvitskii, Proceedings of the eighteenth annual ACM-
SIAM symposium on Discrete algorithms, Society for Industrial and Applied Mathematics (2007)
MiniBatchKMeans is a variant of the KMeans algorithm which uses mini-batches to reduce the computation time, while still attempting to optimise the same objective function. The algorithm iterates between two major steps, similar to vanilla k-means. In the first step, samples are drawn randomly from the
dataset, to form a mini-batch. These are then assigned to the nearest centroid. In the second step, the centroids are updated. In contrast
to k-means, this is done on a per-sample basis. For each sample in the mini-batch, the assigned centroid is updated by taking the
streaming average of the sample and all previous samples assigned to that centroid. This has the effect of decreasing the rate of change
for a centroid over time. These steps are performed until convergence or a predetermined number of iterations is reached.
MiniBatchKMeans converges faster than KMeans, but the quality of the results is reduced. In practice this difference in quality can be quite
small, as shown in the example and cited reference.
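A minimal sketch (synthetic data and illustrative parameter values) of fitting MiniBatchKMeans in one call, and of feeding mini-batches manually via partial_fit as one would with streaming data:

>>> from sklearn.datasets import make_blobs
>>> from sklearn.cluster import MiniBatchKMeans
>>> X, _ = make_blobs(n_samples=10000, centers=5, random_state=0)
>>> mbk = MiniBatchKMeans(n_clusters=5, batch_size=256, random_state=0).fit(X)
>>> labels = mbk.labels_
>>> mbk2 = MiniBatchKMeans(n_clusters=5, random_state=0)
>>> for i in range(0, len(X), 256):
...     mbk2.partial_fit(X[i:i + 256])         # one mini-batch at a time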
Examples:
Comparison of the K-Means and MiniBatchKMeans clustering algorithms: Comparison of KMeans and MiniBatchKMeans
Clustering text documents using k-means: Document clustering using sparse MiniBatchKMeans
Online learning of a dictionary of parts of faces
References:
“Web Scale K-Means clustering” D. Sculley, Proceedings of the 19th international conference on World wide web (2010)
2.3.3. Affinity Propagation

Affinity Propagation can be interesting as it chooses the number of clusters based on the data provided. For this purpose, the two important parameters are the preference, which controls how many exemplars are used, and the damping factor which damps the responsibility and availability messages to avoid numerical oscillations when updating these messages.
The main drawback of Affinity Propagation is its complexity. The algorithm has a time complexity of the order O(N^2 T), where N is the number of samples and T is the number of iterations until convergence. Further, the memory complexity is of the order O(N^2) if a dense similarity matrix is used, but reducible if a sparse similarity matrix is used. This makes Affinity Propagation most appropriate for small to medium sized datasets.
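A minimal sketch (synthetic data; the preference and damping values are illustrative, not recommendations):

>>> from sklearn.datasets import make_blobs
>>> from sklearn.cluster import AffinityPropagation
>>> X, _ = make_blobs(n_samples=100, centers=3, random_state=0)
>>> af = AffinityPropagation(preference=-50, damping=0.7, random_state=0).fit(X)
>>> exemplar_idx = af.cluster_centers_indices_   # indices of the chosen exemplars
>>> labels = af.labels_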
Examples:
Demo of affinity propagation clustering algorithm: Affinity Propagation on a synthetic 2D dataset with 3 classes.
Visualizing the stock market structure Affinity Propagation on Financial time series to find groups of companies
Algorithm description: The messages sent between points belong to one of two categories. The first is the responsibility r(i, k), which is the accumulated evidence that sample k should be the exemplar for sample i. The second is the availability a(i, k), which is the accumulated evidence that sample i should choose sample k to be its exemplar, and considers the values for all other samples that k should be an exemplar. In this way, exemplars are chosen by samples if they are (1) similar enough to many samples and (2) chosen by many samples to be representative of themselves.

More formally, the responsibility of a sample k to be the exemplar of sample i is given by:

r(i, k) \leftarrow s(i, k) - \max [ a(i, k') + s(i, k') \; \forall k' \neq k ]
Where s(i, k) is the similarity between samples i and k. The availability of sample k to be the exemplar of sample i is given by:

a(i, k) \leftarrow \min [0, r(k, k) + \sum_{i' \, \text{s.t.} \, i' \notin \{i, k\}} r(i', k)]

To begin with, all values for r and a are set to zero, and the calculation of each iterates until convergence. As discussed above, in order to avoid numerical oscillations when updating the messages, the damping factor \lambda is introduced to the iteration process:

r_{t+1}(i, k) = \lambda \cdot r_t(i, k) + (1 - \lambda) \cdot r_{t+1}(i, k)

a_{t+1}(i, k) = \lambda \cdot a_t(i, k) + (1 - \lambda) \cdot a_{t+1}(i, k)

where t indicates the iteration times.
2.3.4. Mean Shift

MeanShift clustering aims to discover blobs in a smooth density of samples. It is a centroid based algorithm, which works by updating candidates for centroids to be the mean of the points within a given region. These candidates are then filtered in a post-processing stage to eliminate near-duplicates to form the final set of centroids.

Given a candidate centroid x_i for iteration t, the candidate is updated according to the following equation:

x_i^{t+1} = m(x_i^t)

Where N(x_i) is the neighborhood of samples within a given distance around x_i and m is the mean shift vector that is computed for each centroid that points towards a region of the maximum increase in the density of points. This is computed using the following equation, effectively updating a centroid to be the mean of the samples within its neighborhood:

m(x_i) = \frac{\sum_{x_j \in N(x_i)} K(x_j - x_i) \, x_j}{\sum_{x_j \in N(x_i)} K(x_j - x_i)}
The algorithm automatically sets the number of clusters, instead of relying on a parameter bandwidth , which dictates the size of the re-
gion to search through. This parameter can be set manually, but can be estimated using the provided estimate_bandwidth function,
which is called if the bandwidth is not set.
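A minimal sketch (synthetic data; the quantile value is illustrative) of estimating the bandwidth and fitting MeanShift:

>>> from sklearn.datasets import make_blobs
>>> from sklearn.cluster import MeanShift, estimate_bandwidth
>>> X, _ = make_blobs(n_samples=300, centers=3, random_state=0)
>>> bw = estimate_bandwidth(X, quantile=0.2)   # data-driven bandwidth estimate
>>> ms = MeanShift(bandwidth=bw, bin_seeding=True).fit(X)
>>> labels = ms.labels_
>>> centers = ms.cluster_centers_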
The algorithm is not highly scalable, as it requires multiple nearest neighbor searches during the execution of the algorithm. The algo-
rithm is guaranteed to converge, however the algorithm will stop iterating when the change in centroids is small.
Labelling a new sample is performed by finding the nearest centroid for a given sample.
Examples:
A demo of the mean-shift clustering algorithm: Mean Shift clustering on a synthetic 2D dataset with 3 classes.
References:
“Mean shift: A robust approach toward feature space analysis.” D. Comaniciu and P. Meer, IEEE Transactions on Pattern Analysis and
Machine Intelligence (2002)
2.3.5. Spectral clustering

SpectralClustering performs a low-dimension embedding of the affinity matrix between samples, followed by clustering, e.g., by KMeans, of the components of the eigenvectors in the low dimensional space. It is especially computationally efficient if the affinity matrix is sparse and the amg solver is used for the eigenvalue problem (Note, the amg solver requires that the pyamg module is installed.)
The present version of SpectralClustering requires the number of clusters to be specified in advance. It works well for a small number of
clusters, but is not advised for many clusters.
For two clusters, SpectralClustering solves a convex relaxation of the normalised cuts problem on the similarity graph: cutting the graph in two so that the weight of the edges cut is small compared to the weights of the edges inside each cluster. This criterion is especially interesting when working on images, where graph vertices are pixels, and weights of the edges of the similarity graph are computed using a function of a gradient of the image.
Examples:
Spectral clustering for image segmentation: Segmenting objects from a noisy background using spectral clustering.
Segmenting the picture of greek coins in regions: Spectral clustering to split the image of coins in regions.
assign_labels="kmeans" assign_labels="discretize"
>>>
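A minimal sketch (non-convex synthetic data; the affinity and assign_labels values are illustrative):

>>> from sklearn.datasets import make_moons
>>> from sklearn.cluster import SpectralClustering
>>> X, _ = make_moons(n_samples=200, noise=0.05, random_state=0)
>>> sc = SpectralClustering(n_clusters=2, affinity="nearest_neighbors",
...                         assign_labels="discretize", random_state=0).fit(X)
>>> labels = sc.labels_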
References:
“A Tutorial on Spectral Clustering” Ulrike von Luxburg, 2007
“Normalized cuts and image segmentation” Jianbo Shi, Jitendra Malik, 2000
“A Random Walks View of Spectral Segmentation” Marina Meila, Jianbo Shi, 2001
“On Spectral Clustering: Analysis and an algorithm” Andrew Y. Ng, Michael I. Jordan, Yair Weiss, 2001
“Preconditioned Spectral Clustering for Stochastic Block Partition Streaming Graph Challenge” David Zhuzhunashvili, Andrew
Knyazev
2.3.6. Hierarchical clustering

Hierarchical clustering is a general family of clustering algorithms that build nested clusters by merging or splitting them successively. The AgglomerativeClustering object performs a hierarchical clustering using a bottom up approach: each observation starts in its own cluster, and clusters are successively merged together. The linkage criterion determines the metric used for the merge strategy:
Ward minimizes the sum of squared differences within all clusters. It is a variance-minimizing approach and in this sense is similar to
the k-means objective function but tackled with an agglomerative hierarchical approach.
Maximum or complete linkage minimizes the maximum distance between observations of pairs of clusters.
Average linkage minimizes the average of the distances between all observations of pairs of clusters.
Single linkage minimizes the distance between the closest observations of pairs of clusters.
AgglomerativeClustering can also scale to large numbers of samples when it is used jointly with a connectivity matrix, but is computationally expensive when no connectivity constraints are added between samples: it considers at each step all the possible merges.
FeatureAgglomeration
The FeatureAgglomeration uses agglomerative clustering to group together features that look very similar, thus decreasing the number
of features. It is a dimensionality reduction tool, see Unsupervised dimensionality reduction.
2.3.6.1. Different linkage type: Ward, complete, average, and single linkage
AgglomerativeClustering supports Ward, single, average, and complete linkage strategies.
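A minimal sketch (synthetic data) looping over the four linkage strategies:

>>> from sklearn.datasets import make_blobs
>>> from sklearn.cluster import AgglomerativeClustering
>>> X, _ = make_blobs(n_samples=150, centers=3, random_state=0)
>>> for linkage in ("ward", "complete", "average", "single"):
...     labels = AgglomerativeClustering(n_clusters=3, linkage=linkage).fit_predict(X)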
Agglomerative clustering has a “rich get richer” behavior that leads to uneven cluster sizes. In this regard, single linkage is the worst strategy, and Ward gives the most regular sizes. However, the affinity (or distance used in clustering) cannot be varied with Ward, thus for non-Euclidean metrics, average linkage is a good alternative. Single linkage, while not robust to noisy data, can be computed very efficiently and can therefore be useful to provide hierarchical clustering of larger datasets. Single linkage can also perform well on non-globular data.
Examples:
Various Agglomerative Clustering on a 2D embedding of digits: exploration of the different linkage strategies in a real dataset.
2.3.6.2. Adding connectivity constraints

An interesting aspect of AgglomerativeClustering is that connectivity constraints can be added to this algorithm (only adjacent clusters can be merged together), through a connectivity matrix that defines for each sample the neighboring samples following a given structure of the data. These constraints are useful to impose a certain local structure, but they also make the algorithm faster, especially when the number of samples is high.

The connectivity constraints are imposed via a connectivity matrix: a scipy sparse matrix that has elements only at the intersection of a row and a column with indices of the dataset that should be connected. This matrix can be constructed from a-priori information: for instance, you may wish to cluster web pages by only merging pages with a link pointing from one to another. It can also be learned from the data, for instance using sklearn.neighbors.kneighbors_graph to restrict merging to nearest neighbors as in this example, or using sklearn.feature_extraction.image.grid_to_graph to enable only merging of neighboring pixels on an image, as in the coin example.
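A minimal sketch (synthetic data; n_neighbors chosen only for illustration) of learning a connectivity matrix from the data and passing it to Ward clustering:

>>> from sklearn.datasets import make_blobs
>>> from sklearn.neighbors import kneighbors_graph
>>> from sklearn.cluster import AgglomerativeClustering
>>> X, _ = make_blobs(n_samples=300, centers=4, random_state=0)
>>> connectivity = kneighbors_graph(X, n_neighbors=10, include_self=False)
>>> ward = AgglomerativeClustering(n_clusters=4, linkage="ward",
...                                connectivity=connectivity).fit(X)
>>> labels = ward.labels_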
Examples:
A demo of structured Ward hierarchical clustering on an image of coins: Ward clustering to split the image of coins in regions.
Hierarchical clustering: structured vs unstructured ward: Example of Ward algorithm on a swiss-roll, comparison of structured ap-
proaches versus unstructured approaches.
Feature agglomeration vs. univariate selection: Example of dimensionality reduction with feature agglomeration based on Ward hi-
erarchical clustering.
Agglomerative clustering with and without structure
2.3.6.3. Varying the metric

Single, average and complete linkage can be used with a variety of distances (or affinities), in particular Euclidean distance (l2), Manhattan distance (or Cityblock, or l1), cosine distance, or any precomputed affinity matrix.

l1 distance is often good for sparse features, or sparse noise: i.e. many of the features are zero, as in text mining using occurrences of rare words.

cosine distance is interesting because it is invariant to global scalings of the signal.

The guideline for choosing a metric is to use one that maximizes the distance between samples in different classes, and minimizes that within each class.
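A minimal sketch (random non-negative count data; in this scikit-learn version the parameter is named affinity, renamed metric in later releases):

>>> import numpy as np
>>> from sklearn.cluster import AgglomerativeClustering
>>> rng = np.random.RandomState(0)
>>> X = rng.poisson(0.5, size=(100, 30))       # sparse-looking count features
>>> l1_model = AgglomerativeClustering(n_clusters=3, linkage="average", affinity="manhattan")
>>> labels_l1 = l1_model.fit_predict(X)
>>> cos_model = AgglomerativeClustering(n_clusters=3, linkage="average", affinity="cosine")
>>> labels_cos = cos_model.fit_predict(X)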
Examples:
Agglomerative clustering with different metrics
2.3.7. DBSCAN
The DBSCAN algorithm views clusters as areas of high density separated by areas of low density. Due to this rather generic view, clusters
found by DBSCAN can be any shape, as opposed to k-means which assumes that clusters are convex shaped. The central component to
the DBSCAN is the concept of core samples, which are samples that are in areas of high density. A cluster is therefore a set of core sam-
ples, each close to each other (measured by some distance measure) and a set of non-core samples that are close to a core sample (but
are not themselves core samples). There are two parameters to the algorithm, min_samples and eps , which define formally what we mean
when we say dense. Higher min_samples or lower eps indicate higher density necessary to form a cluster.
More formally, we define a core sample as being a sample in the dataset such that there exist min_samples other samples within a dis-
tance of eps , which are defined as neighbors of the core sample. This tells us that the core sample is in a dense area of the vector space.
A cluster is a set of core samples that can be built by recursively taking a core sample, finding all of its neighbors that are core samples,
finding all of their neighbors that are core samples, and so on. A cluster also has a set of non-core samples, which are samples that are
neighbors of a core sample in the cluster but are not themselves core samples. Intuitively, these samples are on the fringes of a cluster.
Any core sample is part of a cluster, by definition. Any sample that is not a core sample, and is at least eps in distance from any core
sample, is considered an outlier by the algorithm.
While the parameter min_samples primarily controls how tolerant the algorithm is towards noise (on noisy and large data sets it may be
desirable to increase this parameter), the parameter eps is crucial to choose appropriately for the data set and distance function and usu-
ally cannot be left at the default value. It controls the local neighborhood of the points. When chosen too small, most data will not be
clustered at all (and labeled as -1 for “noise”). When chosen too large, it causes close clusters to be merged into one cluster, and eventu-
ally the entire data set to be returned as a single cluster. Some heuristics for choosing this parameter have been discussed in the litera-
ture, for example based on a knee in the nearest neighbor distances plot (as discussed in the references below).
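A minimal sketch (synthetic data; eps and min_samples values are illustrative, not defaults to reuse blindly):

>>> from sklearn.datasets import make_blobs
>>> from sklearn.cluster import DBSCAN
>>> X, _ = make_blobs(n_samples=300, centers=3, cluster_std=0.5, random_state=0)
>>> db = DBSCAN(eps=0.5, min_samples=5).fit(X)
>>> labels = db.labels_                          # -1 marks noise samples
>>> core_idx = db.core_sample_indices_           # indices of the core samples
>>> n_clusters = len(set(labels)) - (1 if -1 in labels else 0)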
In the figure below, the color indicates cluster membership, with large circles indicating core samples found by the algorithm. Smaller cir-
cles are non-core samples that are still part of a cluster. Moreover, the outliers are indicated by black points below.
Examples:
Demo of DBSCAN clustering algorithm
Implementation
The DBSCAN algorithm is deterministic, always generating the same clusters when given the same data in the same order. However,
the results can differ when data is provided in a different order. First, even though the core samples will always be assigned to the
same clusters, the labels of those clusters will depend on the order in which those samples are encountered in the data. Second and
more importantly, the clusters to which non-core samples are assigned can differ depending on the data order. This would happen
when a non-core sample has a distance lower than eps to two core samples in different clusters. By the triangular inequality, those two
core samples must be more distant than eps from each other, or they would be in the same cluster. The non-core sample is assigned
to whichever cluster is generated first in a pass through the data, and so the results will depend on the data ordering.
The current implementation uses ball trees and kd-trees to determine the neighborhood of points, which avoids calculating the full
distance matrix (as was done in scikit-learn versions before 0.14). The possibility to use custom metrics is retained; for details, see
NearestNeighbors .
References:
“A Density-Based Algorithm for Discovering Clusters in Large Spatial Databases with Noise” Ester, M., H. P. Kriegel, J. Sander, and X.
Xu, In Proceedings of the 2nd International Conference on Knowledge Discovery and Data Mining, Portland, OR, AAAI Press, pp.
226–231. 1996
“DBSCAN revisited, revisited: why and how you should (still) use DBSCAN.” Schubert, E., Sander, J., Ester, M., Kriegel, H. P., & Xu, X. (2017). In ACM Transactions on Database Systems (TODS), 42(3), 19.
2.3.8. OPTICS
The OPTICS algorithm shares many similarities with the DBSCAN algorithm, and can be considered a generalization of DBSCAN that relaxes
the eps requirement from a single value to a value range. The key difference between DBSCAN and OPTICS is that the OPTICS algorithm
builds a reachability graph, which assigns each sample both a reachability_ distance, and a spot within the cluster ordering_ attribute;
these two attributes are assigned when the model is fitted, and are used to determine cluster membership. If OPTICS is run with the de-
fault value of inf set for max_eps , then DBSCAN style cluster extraction can be performed repeatedly in linear time for any given eps
value using the cluster_optics_dbscan method. Setting max_eps to a lower value will result in shorter run times, and can be thought of
as the maximum neighborhood radius from each point to find other potential reachable points.
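A minimal sketch (synthetic data; min_samples, xi and the eps cut value are illustrative) of fitting OPTICS and then extracting DBSCAN-style labels at a given eps:

>>> from sklearn.datasets import make_blobs
>>> from sklearn.cluster import OPTICS, cluster_optics_dbscan
>>> X, _ = make_blobs(n_samples=300, centers=3, random_state=0)
>>> opt = OPTICS(min_samples=10, xi=0.05).fit(X)
>>> labels_xi = opt.labels_                       # default xi-based extraction
>>> labels_dbscan = cluster_optics_dbscan(
...     reachability=opt.reachability_, core_distances=opt.core_distances_,
...     ordering=opt.ordering_, eps=1.0)          # DBSCAN-style extraction at eps=1.0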
The reachability distances generated by OPTICS allow for variable density extraction of clusters within a single data set. As shown in the
above plot, combining reachability distances and data set ordering_ produces a reachability plot, where point density is represented on
the Y-axis, and points are ordered such that nearby points are adjacent. ‘Cutting’ the reachability plot at a single value produces DBSCAN
like results; all points above the ‘cut’ are classified as noise, and each time that there is a break when reading from left to right signifies a
new cluster. The default cluster extraction with OPTICS looks at the steep slopes within the graph to find clusters, and the user can define
what counts as a steep slope using the parameter xi . There are also other possibilities for analysis on the graph itself, such as generating
hierarchical representations of the data through reachability-plot dendrograms, and the hierarchy of clusters detected by the algorithm
can be accessed through the cluster_hierarchy_ parameter. The plot above has been color-coded so that cluster colors in planar space
match the linear segment clusters of the reachability plot. Note that the blue and red clusters are adjacent in the reachability plot, and can be hierarchically represented as children of a larger parent cluster.
Examples:
Demo of OPTICS clustering algorithm
Computational Complexity
Spatial indexing trees are used to avoid calculating the full distance matrix, and allow for efficient memory usage on large sets of sam-
ples. Different distance metrics can be supplied via the metric keyword.
For large datasets, similar (but not identical) results can be obtained via HDBSCAN. The HDBSCAN implementation is multithreaded, and has better algorithmic runtime complexity than OPTICS, at the cost of worse memory scaling. For extremely large datasets that exhaust system memory using HDBSCAN, OPTICS will maintain O(n) (as opposed to O(n^2)) memory scaling; however, tuning of the max_eps parameter will likely need to be used to give a solution in a reasonable amount of wall time.
References:
“OPTICS: ordering points to identify the clustering structure.” Ankerst, Mihael, Markus M. Breunig, Hans-Peter Kriegel, and Jörg
Sander. In ACM Sigmod Record, vol. 28, no. 2, pp. 49-60. ACM, 1999.
2.3.9. BIRCH
Birch builds a tree called the Clustering Feature Tree (CFT) for the given data. The data is essentially lossy compressed to a set of Clustering Feature nodes (CF Nodes). The CF Nodes have a number of subclusters called Clustering Feature subclusters (CF Subclusters) and these CF Subclusters located in the non-terminal CF Nodes can have CF Nodes as children.

The CF Subclusters hold the necessary information for clustering which prevents the need to hold the entire input data in memory. This information includes:

Number of samples in a subcluster.

Linear Sum - an n-dimensional vector holding the sum of all samples.

Squared Sum - sum of the squared L2 norm of all samples.

Centroids - to avoid recalculation, linear sum / n_samples.

Squared norm of the centroids.
The BIRCH algorithm has two parameters, the threshold and the branching factor. The branching factor limits the number of subclusters
in a node and the threshold limits the distance between the entering sample and the existing subclusters.
This algorithm can be viewed as an instance or data reduction method, since it reduces the input data to a set of subclusters which are
obtained directly from the leaves of the CFT. This reduced data can be further processed by feeding it into a global clusterer. This global
clusterer can be set by n_clusters . If n_clusters is set to None, the subclusters from the leaves are directly read off, otherwise a global
clustering step labels these subclusters into global clusters (labels) and the samples are mapped to the global label of the nearest sub-
cluster.
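A minimal sketch (synthetic data; threshold and branching_factor values are illustrative) of BIRCH with and without the global clustering step:

>>> from sklearn.datasets import make_blobs
>>> from sklearn.cluster import Birch
>>> X, _ = make_blobs(n_samples=1000, centers=5, random_state=0)
>>> brc = Birch(threshold=0.5, branching_factor=50, n_clusters=5).fit(X)
>>> labels = brc.predict(X)
>>> brc_raw = Birch(threshold=0.5, n_clusters=None).fit(X)   # read subclusters directly
>>> subcluster_centers = brc_raw.subcluster_centers_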
Algorithm description:
A new sample is inserted into the root of the CF Tree which is a CF Node. It is then merged with the subcluster of the root, that has
the smallest radius after merging, constrained by the threshold and branching factor conditions. If the subcluster has any child node,
then this is done repeatedly till it reaches a leaf. After finding the nearest subcluster in the leaf, the properties of this subcluster and
the parent subclusters are recursively updated.
If the radius of the subcluster obtained by merging the new sample and the nearest subcluster is greater than the square of the threshold and if the number of subclusters is greater than the branching factor, then a space is temporarily allocated to this new sample. The two farthest subclusters are taken and the subclusters are divided into two groups on the basis of the distance between these subclusters.
If this split node has a parent subcluster and there is room for a new subcluster, then the parent is split into two. If there is no room,
then this node is again split into two and the process is continued recursively, till it reaches the root.
BIRCH or MiniBatchKMeans?
BIRCH does not scale very well to high dimensional data. As a rule of thumb if n_features is greater than twenty, it is generally better
to use MiniBatchKMeans.
If the number of instances of data needs to be reduced, or if one wants a large number of subclusters either as a preprocessing step
or otherwise, BIRCH is more useful than MiniBatchKMeans.
To avoid the computation of global clustering, for every call of partial_fit the user is advised:

1. To set n_clusters=None initially.

2. Train all data by multiple calls to partial_fit.

3. Set n_clusters to a required value using brc.set_params(n_clusters=n_clusters).

4. Call partial_fit finally with no arguments, i.e. brc.partial_fit(), which performs the global clustering.
References:
Tian Zhang, Raghu Ramakrishnan, Miron Livny BIRCH: An efficient data clustering method for large databases.
https://www.cs.sfu.ca/CourseCentral/459/han/papers/zhang96.pdf
Roberto Perdisci JBirch - Java implementation of BIRCH clustering algorithm https://code.google.com/archive/p/jbirch
2.3.10. Clustering performance evaluation

Evaluating the performance of a clustering algorithm is not as trivial as counting the number of errors or the precision and recall of a supervised classification algorithm.

2.3.10.1. Rand index

Given the knowledge of the ground truth class assignments labels_true and our clustering algorithm assignments of the same samples labels_pred, the (adjusted or unadjusted) Rand index is a function that measures the similarity of the two assignments, ignoring permutations.

The Rand index does not ensure to obtain a value close to 0.0 for a random labelling. The adjusted Rand index corrects for chance and will give such a baseline.
As with all clustering metrics, one can permute 0 and 1 in the predicted labels, rename 2 to 3, and get the same score.

Furthermore, both rand_score and adjusted_rand_score are symmetric: swapping the arguments does not change the scores. They can thus be used as consensus measures.

Poorly agreeing labels (e.g. independent labelings) have lower scores, and for the adjusted Rand index the score will be negative or close to zero. However, for the unadjusted Rand index the score, while lower, will not necessarily be close to zero.
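A minimal sketch of these properties with toy label lists (values omitted; the exact scores depend on the labels chosen):

>>> from sklearn import metrics
>>> labels_true = [0, 0, 0, 1, 1, 1]
>>> labels_pred = [0, 0, 1, 1, 2, 2]
>>> ri = metrics.rand_score(labels_true, labels_pred)
>>> ari = metrics.adjusted_rand_score(labels_true, labels_pred)
>>> # permuting 0 and 1 and renaming 2 to 3 leaves both scores unchanged
>>> ari_perm = metrics.adjusted_rand_score(labels_true, [1, 1, 0, 0, 3, 3])
>>> # both functions are symmetric: swapping the arguments gives the same score
>>> ri_sym = metrics.rand_score(labels_pred, labels_true)
>>> # an independent (random-like) labeling gives an ARI that is negative or close to 0
>>> ari_bad = metrics.adjusted_rand_score([0, 0, 1, 1], [0, 1, 0, 1])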
2.3.10.1.1. Advantages
Interpretability: The unadjusted Rand index is proportional to the number of sample pairs whose labels are the same in both labels_pred and labels_true, or are different in both.
Random (uniform) label assignments have an adjusted Rand index score close to 0.0 for any value of n_clusters and n_samples
(which is not the case for the unadjusted Rand index or the V-measure for instance).
Bounded range: Lower values indicate different labelings, similar clusterings have a high (adjusted or unadjusted) Rand index, 1.0 is
the perfect match score. The score range is [0, 1] for the unadjusted Rand index and [-1, 1] for the adjusted Rand index.
No assumption is made on the cluster structure: The (adjusted or unadjusted) Rand index can be used to compare all kinds of clustering algorithms, and can be used to compare clustering algorithms such as k-means which assumes isotropic blob shapes with results of spectral clustering algorithms which can find clusters with “folded” shapes.
2.3.10.1.2. Drawbacks
Contrary to inertia, the (adjusted or unadjusted) Rand index requires knowledge of the ground truth classes which is almost
never available in practice or requires manual assignment by human annotators (as in the supervised learning setting).
However (adjusted or unadjusted) Rand index can also be useful in a purely unsupervised setting as a building block for a
Consensus Index that can be used for clustering model selection (TODO).
The unadjusted Rand index is often close to 1.0 even if the clusterings themselves differ significantly. This can be understood
when interpreting the Rand index as the accuracy of element pair labeling resulting from the clusterings: In practice there often is a
majority of element pairs that are assigned the different pair label under both the predicted and the ground truth clustering re-
sulting in a high proportion of pair labels that agree, which leads subsequently to a high score.
Examples:
Adjustment for chance in clustering performance evaluation: Analysis of the impact of the dataset size on the value of clustering
measures for random assignments.
If C is a ground truth class assignment and K the clustering, let us define a and b as:

a, the number of pairs of elements that are in the same set in C and in the same set in K

b, the number of pairs of elements that are in different sets in C and in different sets in K

The unadjusted Rand index is then given by:

\text{RI} = \frac{a + b}{C_2^{n_{samples}}}

where C_2^{n_{samples}} is the total number of possible pairs in the dataset. It does not matter if the calculation is performed on ordered pairs or unordered pairs as long as the calculation is performed consistently.
However, the Rand index does not guarantee that random label assignments will get a value close to zero (esp. if the number of clusters
is in the same order of magnitude as the number of samples).
To counter this effect we can discount the expected RI, E[RI], of random labelings by defining the adjusted Rand index as follows:

\text{ARI} = \frac{\text{RI} - E[\text{RI}]}{\max(\text{RI}) - E[\text{RI}]}
References
Comparing Partitions L. Hubert and P. Arabie, Journal of Classification 1985
Properties of the Hubert-Arabie adjusted Rand index D. Steinley, Psychological Methods 2004
Wikipedia entry for the Rand index
Wikipedia entry for the adjusted Rand index
2.3.10.2. Mutual Information based scores

Given the knowledge of the ground truth class assignments labels_true and our clustering algorithm assignments of the same samples labels_pred, the Mutual Information is a function that measures the agreement of the two assignments, ignoring permutations. Two different normalized versions of this measure are available, Normalized Mutual Information (NMI) and Adjusted Mutual Information (AMI).

One can permute 0 and 1 in the predicted labels, rename 2 to 3 and get the same score.

All of mutual_info_score, adjusted_mutual_info_score and normalized_mutual_info_score are symmetric: swapping the arguments does not change the score. Thus they can be used as consensus measures.
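A minimal sketch with toy label lists (values omitted; the exact scores depend on the labels chosen):

>>> from sklearn import metrics
>>> labels_true = [0, 0, 0, 1, 1, 1]
>>> labels_pred = [0, 0, 1, 1, 2, 2]
>>> ami = metrics.adjusted_mutual_info_score(labels_true, labels_pred)
>>> # permuting 0 and 1 and renaming 2 to 3 gives the same score
>>> ami_perm = metrics.adjusted_mutual_info_score(labels_true, [1, 1, 0, 0, 3, 3])
>>> # the MI-based scores are symmetric in their arguments
>>> ami_sym = metrics.adjusted_mutual_info_score(labels_pred, labels_true)
>>> nmi = metrics.normalized_mutual_info_score(labels_true, labels_pred)
>>> mi = metrics.mutual_info_score(labels_true, labels_pred)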
2.3.10.2.1. Advantages
Random (uniform) label assignments have an AMI score close to 0.0 for any value of n_clusters and n_samples (which is not the case for raw Mutual Information or the V-measure for instance).
Upper bound of 1: Values close to zero indicate two label assignments that are largely independent, while values close to one indi-
cate significant agreement. Further, an AMI of exactly 1 indicates that the two label assignments are equal (with or without permuta-
tion).
2.3.10.2.2. Drawbacks
Contrary to inertia, MI-based measures require the knowledge of the ground truth classes, which is almost never available in practice or requires manual assignment by human annotators (as in the supervised learning setting).

However MI-based measures can also be useful in a purely unsupervised setting as a building block for a Consensus Index that can be used for clustering model selection.
Examples:
Adjustment for chance in clustering performance evaluation: Analysis of the impact of the dataset size on the value of clustering
measures for random assignments. This example also includes the Adjusted Rand Index.
Assume two label assignments (of the same N objects), U and V. Their entropy is the amount of uncertainty for a partition set, defined by:

H(U) = - \sum_{i=1}^{|U|} P(i) \log(P(i))

where P(i) = |U_i| / N is the probability that an object picked at random from U falls into class U_i. Likewise for V:

H(V) = - \sum_{j=1}^{|V|} P'(j) \log(P'(j))

with P'(j) = |V_j| / N. The mutual information (MI) between U and V is calculated by:

\text{MI}(U, V) = \sum_{i=1}^{|U|} \sum_{j=1}^{|V|} P(i, j) \log \left( \frac{P(i, j)}{P(i) P'(j)} \right)

where P(i, j) = |U_i \cap V_j| / N is the probability that an object picked at random falls into both classes U_i and V_j.

This value of the mutual information and also the normalized variant is not adjusted for chance and will tend to increase as the number of different labels (clusters) increases, regardless of the actual amount of “mutual information” between the label assignments.

The expected value for the mutual information can be calculated using the equation given in [VEB2009]. In this equation, a_i = |U_i| (the number of elements in U_i) and b_j = |V_j| (the number of elements in V_j).
Using the expected value, the adjusted mutual information can then be calculated using a similar form to that of the adjusted Rand index:

\text{AMI} = \frac{\text{MI} - E[\text{MI}]}{\text{mean}(H(U), H(V)) - E[\text{MI}]}
For normalized mutual information and adjusted mutual information, the normalizing value is typically some generalized mean of the entropies of each clustering. Various generalized means exist, and no firm rules exist for preferring one over the others. The decision is largely made on a field-by-field basis; for instance, in community detection, the arithmetic mean is most common. Each normalizing method provides “qualitatively similar behaviours” [YAT2016]. In our implementation, this is controlled by the average_method parameter.
Vinh et al. (2010) named variants of NMI and AMI by their averaging method [VEB2010]. Their ‘sqrt’ and ‘sum’ averages are the geometric
and arithmetic means; we use these more broadly common names.
References
Strehl, Alexander, and Joydeep Ghosh (2002). “Cluster ensembles – a knowledge reuse framework for combining multiple parti-
tions”. Journal of Machine Learning Research 3: 583–617. doi:10.1162/153244303321897735.
Wikipedia entry for the (normalized) Mutual Information
Wikipedia entry for the Adjusted Mutual Information
[VEB2009] Vinh, Epps, and Bailey, (2009). “Information theoretic measures for clusterings comparison”. Proceedings of the 26th
Annual International Conference on Machine Learning - ICML ‘09. doi:10.1145/1553374.1553511. ISBN 9781605585161.
[VEB2010] Vinh, Epps, and Bailey, (2010). “Information Theoretic Measures for Clusterings Comparison: Variants, Properties,
Normalization and Correction for Chance”. JMLR <http://jmlr.csail.mit.edu/papers/volume11/vinh10a/vinh10a.pdf>
[YAT2016] Yang, Algesheimer, and Tessone, (2016). “A comparative analysis of community detection algorithms on artificial networks”.
Scientific Reports 6: 30750. doi:10.1038/srep30750.
2.3.10.3. Homogeneity, completeness and V-measure

In particular Rosenberg and Hirschberg (2007) define the following two desirable objectives for any cluster assignment:

homogeneity: each cluster contains only members of a single class.

completeness: all members of a given class are assigned to the same cluster.

We can turn those concepts into scores with homogeneity_score and completeness_score. Both are bounded below by 0.0 and above by 1.0 (higher is better). Their harmonic mean, called V-measure, is computed by v_measure_score. Its beta parameter defaults to a value of 1.0; using a value less than 1 attributes more weight to homogeneity, while using a value greater than 1 attributes more weight to completeness.
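A minimal sketch with toy label lists (values omitted; beta values are illustrative):

>>> from sklearn import metrics
>>> labels_true = [0, 0, 0, 1, 1, 1]
>>> labels_pred = [0, 0, 1, 1, 2, 2]
>>> h = metrics.homogeneity_score(labels_true, labels_pred)
>>> c = metrics.completeness_score(labels_true, labels_pred)
>>> v = metrics.v_measure_score(labels_true, labels_pred)                   # beta defaults to 1.0
>>> v_homog = metrics.v_measure_score(labels_true, labels_pred, beta=0.6)   # favours homogeneity
>>> v_compl = metrics.v_measure_score(labels_true, labels_pred, beta=1.8)   # favours completeness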
The V-measure is actually equivalent to the mutual information (NMI) discussed above, with the aggregation function being the arith-
metic mean [B2011].
Homogeneity, completeness and V-measure can be computed at once using homogeneity_completeness_v_measure. A clustering assignment that is homogeneous but not complete scores slightly better:
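A minimal sketch of both calls (toy labels; in the second assignment every predicted cluster is pure, so homogeneity reaches 1.0 while completeness stays below it):

>>> from sklearn import metrics
>>> labels_true = [0, 0, 0, 1, 1, 1]
>>> h, c, v = metrics.homogeneity_completeness_v_measure(labels_true, [0, 0, 1, 1, 2, 2])
>>> h2, c2, v2 = metrics.homogeneity_completeness_v_measure(labels_true, [0, 0, 0, 1, 2, 2])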
Note: v_measure_score is symmetric: it can be used to evaluate the agreement of two independent assignments on the same
dataset.
This is not the case for completeness_score and homogeneity_score: both are bound by the relationship:
homogeneity_score(a, b) == completeness_score(b, a)
2.3.10.3.1. Advantages
Bounded scores: 0.0 is as bad as it can be, 1.0 is a perfect score.
Intuitive interpretation: clustering with bad V-measure can be qualitatively analyzed in terms of homogeneity and completeness to get a better feel for what ‘kind’ of mistakes is made by the assignment.

No assumption is made on the cluster structure: can be used to compare clustering algorithms such as k-means which assumes isotropic blob shapes with results of spectral clustering algorithms which can find clusters with “folded” shapes.
2.3.10.3.2. Drawbacks
The previously introduced metrics are not normalized with regards to random labeling: this means that depending on the num-
ber of samples, clusters and ground truth classes, a completely random labeling will not always yield the same values for homo-
geneity, completeness and hence v-measure. In particular random labeling won’t yield zero scores especially when the number
of clusters is large.
This problem can safely be ignored when the number of samples is more than a thousand and the number of clusters is less than
10. For smaller sample sizes or larger number of clusters it is safer to use an adjusted index such as the Adjusted Rand Index
(ARI).
These metrics require the knowledge of the ground truth classes, which is almost never available in practice or requires manual assignment by human annotators (as in the supervised learning setting).
Examples:
Adjustment for chance in clustering performance evaluation: Analysis of the impact of the dataset size on the value of clustering
measures for random assignments.
Homogeneity and completeness scores are formally given by:

h = 1 - \frac{H(C|K)}{H(C)}

c = 1 - \frac{H(K|C)}{H(K)}

where H(C|K) is the conditional entropy of the classes given the cluster assignments and is given by:

H(C|K) = - \sum_{c=1}^{|C|} \sum_{k=1}^{|K|} \frac{n_{c,k}}{n} \cdot \log \left( \frac{n_{c,k}}{n_k} \right)

and H(C) is the entropy of the classes:

H(C) = - \sum_{c=1}^{|C|} \frac{n_c}{n} \cdot \log \left( \frac{n_c}{n} \right)

with n the total number of samples, n_c and n_k the number of samples respectively belonging to class c and cluster k, and finally n_{c,k} the number of samples from class c assigned to cluster k.

The conditional entropy of clusters given class H(K|C) and the entropy of clusters H(K) are defined in a symmetric manner.

Rosenberg and Hirschberg further define V-measure as the harmonic mean of homogeneity and completeness:

v = 2 \cdot \frac{h \cdot c}{h + c}
References
V-Measure: A conditional entropy-based external cluster evaluation measure Andrew Rosenberg and Julia Hirschberg, 2007
[B2011] Identification and Characterization of Events in Social Media, Hila Becker, PhD Thesis.
2.3.10.4. Fowlkes-Mallows scores

The Fowlkes-Mallows index (sklearn.metrics.fowlkes_mallows_score) can be used when the ground truth class assignments of the samples are known. The Fowlkes-Mallows score FMI is defined as the geometric mean of the pairwise precision and recall:

\text{FMI} = \frac{\text{TP}}{\sqrt{(\text{TP} + \text{FP})(\text{TP} + \text{FN})}}

Where TP is the number of True Positives (i.e. the number of pairs of points that belong to the same clusters in both the true labels and the predicted labels), FP is the number of False Positives (i.e. the number of pairs of points that belong to the same clusters in the predicted labels but not in the true labels) and FN is the number of False Negatives (i.e. the number of pairs of points that belong to the same clusters in the true labels but not in the predicted labels).

The score ranges from 0 to 1. A high value indicates a good similarity between two clusters.

One can permute 0 and 1 in the predicted labels, rename 2 to 3 and get the same score.
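A minimal sketch with toy label lists (values omitted):

>>> from sklearn import metrics
>>> labels_true = [0, 0, 0, 1, 1, 1]
>>> labels_pred = [0, 0, 1, 1, 2, 2]
>>> fmi = metrics.fowlkes_mallows_score(labels_true, labels_pred)
>>> # permuting 0 and 1 and renaming 2 to 3 gives the same score
>>> fmi_perm = metrics.fowlkes_mallows_score(labels_true, [1, 1, 0, 0, 3, 3])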
2.3.10.4.1. Advantages
Random (uniform) label assignments have a FMI score close to 0.0 for any value of n_clusters and n_samples (which is not the
case for raw Mutual Information or the V-measure for instance).
Upper-bounded at 1: Values close to zero indicate two label assignments that are largely independent, while values close to one indi-
cate significant agreement. Further, values of exactly 0 indicate purely independent label assignments and a FMI of exactly 1 indicates
that the two label assignments are equal (with or without permutation).
No assumption is made on the cluster structure: can be used to compare clustering algorithms such as k-means which assumes isotropic blob shapes with results of spectral clustering algorithms which can find clusters with “folded” shapes.
2.3.10.4.2. Drawbacks
Contrary to inertia, FMI-based measures require the knowledge of the ground truth classes, which is almost never available in practice or requires manual assignment by human annotators (as in the supervised learning setting).
References
E. B. Fowlkes and C. L. Mallows, 1983. “A method for comparing two hierarchical clusterings”. Journal of the American Statistical Association. https://www.tandfonline.com/doi/abs/10.1080/01621459.1983.10478008
Wikipedia entry for the Fowlkes-Mallows Index
2.3.10.5. Silhouette Coefficient

If the ground truth labels are not known, evaluation must be performed using the model itself. The Silhouette Coefficient (sklearn.metrics.silhouette_score) is an example of such an evaluation, where a higher Silhouette Coefficient score relates to a model with better defined clusters. The Silhouette Coefficient is defined for each sample and is composed of two scores:

a: The mean distance between a sample and all other points in the same class.

b: The mean distance between a sample and all other points in the next nearest cluster.

The Silhouette Coefficient s for a single sample is then given as:

s = \frac{b - a}{\max(a, b)}

The Silhouette Coefficient for a set of samples is given as the mean of the Silhouette Coefficient for each sample.

In normal usage, the Silhouette Coefficient is applied to the results of a cluster analysis.
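A minimal sketch (synthetic data clustered with KMeans for illustration):

>>> from sklearn import metrics
>>> from sklearn.datasets import make_blobs
>>> from sklearn.cluster import KMeans
>>> X, _ = make_blobs(n_samples=300, centers=4, random_state=0)
>>> labels = KMeans(n_clusters=4, random_state=0).fit_predict(X)
>>> score = metrics.silhouette_score(X, labels, metric="euclidean")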
References
Peter J. Rousseeuw (1987). “Silhouettes: a Graphical Aid to the Interpretation and Validation of Cluster Analysis”. Computational and
Applied Mathematics 20: 53–65. doi:10.1016/0377-0427(87)90125-7.
2.3.10.5.1. Advantages
The score is bounded between -1 for incorrect clustering and +1 for highly dense clustering. Scores around zero indicate overlapping
clusters.
The score is higher when clusters are dense and well separated, which relates to a standard concept of a cluster.
2.3.10.5.2. Drawbacks
The Silhouette Coefficient is generally higher for convex clusters than other concepts of clusters, such as density based clusters like
those obtained through DBSCAN.
Examples:
Selecting the number of clusters with silhouette analysis on KMeans clustering : In this example the silhouette analysis is used to
choose an optimal value for n_clusters.
2.3.10.6. Calinski-Harabasz Index

If the ground truth labels are not known, the Calinski-Harabasz index (sklearn.metrics.calinski_harabasz_score) - also known as the Variance Ratio Criterion - can be used to evaluate the model, where a higher Calinski-Harabasz score relates to a model with better defined clusters.

The index is the ratio of the sum of between-clusters dispersion and of within-cluster dispersion for all clusters (where dispersion is defined as the sum of distances squared).

In normal usage, the Calinski-Harabasz index is applied to the results of a cluster analysis:
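A minimal sketch (synthetic data clustered with KMeans for illustration):

>>> from sklearn import metrics
>>> from sklearn.datasets import make_blobs
>>> from sklearn.cluster import KMeans
>>> X, _ = make_blobs(n_samples=300, centers=4, random_state=0)
>>> labels = KMeans(n_clusters=4, random_state=0).fit_predict(X)
>>> score = metrics.calinski_harabasz_score(X, labels)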
2.3.10.6.1. Advantages
The score is higher when clusters are dense and well separated, which relates to a standard concept of a cluster.
The score is fast to compute.
2.3.10.6.2. Drawbacks
The Calinski-Harabasz index is generally higher for convex clusters than other concepts of clusters, such as density based clusters like
those obtained through DBSCAN.
For a set of data E of size n_E which has been clustered into k clusters, the Calinski-Harabasz score s is defined as the ratio of the between-clusters dispersion mean and the within-cluster dispersion:

s = \frac{\mathrm{tr}(B_k)}{\mathrm{tr}(W_k)} \times \frac{n_E - k}{k - 1}

where tr(B_k) is the trace of the between group dispersion matrix and tr(W_k) is the trace of the within-cluster dispersion matrix defined by:

W_k = \sum_{q=1}^{k} \sum_{x \in C_q} (x - c_q)(x - c_q)^T

B_k = \sum_{q=1}^{k} n_q (c_q - c_E)(c_q - c_E)^T

with C_q the set of points in cluster q, c_q the center of cluster q, c_E the center of E, and n_q the number of points in cluster q.
References
Caliński, T., & Harabasz, J. (1974). “A Dendrite Method for Cluster Analysis”. Communications in Statistics-theory and Methods 3:
1-27. doi:10.1080/03610927408827101.
2.3.10.7. Davies-Bouldin Index

If the ground truth labels are not known, the Davies-Bouldin index (sklearn.metrics.davies_bouldin_score) can be used to evaluate the model, where a lower Davies-Bouldin index relates to a model with better separation between the clusters.

This index signifies the average ‘similarity’ between clusters, where the similarity is a measure that compares the distance between clusters with the size of the clusters themselves.

Zero is the lowest possible score. Values closer to zero indicate a better partition.

In normal usage, the Davies-Bouldin index is applied to the results of a cluster analysis as follows:
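A minimal sketch (synthetic data clustered with KMeans for illustration):

>>> from sklearn import metrics
>>> from sklearn.datasets import make_blobs
>>> from sklearn.cluster import KMeans
>>> X, _ = make_blobs(n_samples=300, centers=4, random_state=0)
>>> labels = KMeans(n_clusters=4, random_state=0).fit_predict(X)
>>> score = metrics.davies_bouldin_score(X, labels)   # lower is better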
2.3.10.7.1. Advantages

The computation of Davies-Bouldin is simpler than that of Silhouette scores.

The index is based solely on quantities and features inherent to the dataset, as its computation only uses point-wise distances.

2.3.10.7.2. Drawbacks

The Davies-Bouldin index is generally higher for convex clusters than other concepts of clusters, such as density based clusters like those obtained from DBSCAN.

The usage of centroid distance limits the distance metric to Euclidean space.
The index is defined as the average similarity between each cluster C_i for i = 1, ..., k and its most similar one C_j. In the context of this index, similarity is defined as a measure R_{ij} that trades off:

s_i, the average distance between each point of cluster i and the centroid of that cluster - also known as cluster diameter.

d_{ij}, the distance between cluster centroids i and j.

A simple choice to construct R_{ij} so that it is nonnegative and symmetric is:

R_{ij} = \frac{s_i + s_j}{d_{ij}}

Then the Davies-Bouldin index is defined as:

DB = \frac{1}{k} \sum_{i=1}^{k} \max_{i \neq j} R_{ij}
References
Davies, David L.; Bouldin, Donald W. (1979). “A Cluster Separation Measure” IEEE Transactions on Pattern Analysis and Machine
Intelligence. PAMI-1 (2): 224-227. doi:10.1109/TPAMI.1979.4766909.
Halkidi, Maria; Batistakis, Yannis; Vazirgiannis, Michalis (2001). “On Clustering Validation Techniques” Journal of Intelligent
Information Systems, 17(2-3), 107-145. doi:10.1023/A:1012801612483.
Wikipedia entry for Davies-Bouldin index.
2.3.10.8. Contingency Matrix

Contingency matrix (sklearn.metrics.cluster.contingency_matrix) reports the intersection cardinality for every true/predicted cluster pair. The contingency matrix provides sufficient statistics for all clustering metrics where the samples are independent and identically distributed and one doesn't need to account for some instances not being clustered.

Here is an example:
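(A minimal example with toy labels matching the description below; the exact array formatting may differ.)

>>> from sklearn.metrics.cluster import contingency_matrix
>>> x = ["a", "a", "a", "b", "b", "b"]
>>> y = [0, 0, 1, 1, 2, 2]
>>> contingency_matrix(x, y)
array([[2, 1, 0],
       [0, 1, 2]])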
The first row of the output array indicates that there are three samples whose true cluster is “a”. Of them, two are in predicted cluster 0, one is in 1, and none is in 2. And the second row indicates that there are three samples whose true cluster is “b”. Of them, none is in predicted cluster 0, one is in 1 and two are in 2.
A confusion matrix for classification is a square contingency matrix where the order of rows and columns correspond to a list of classes.
2.3.10.8.1. Advantages
Allows examining the spread of each true cluster across predicted clusters and vice versa.
The contingency table calculated is typically utilized in the calculation of a similarity statistic (like the others listed in this document)
between the two clusterings.
2.3.10.8.2. Drawbacks
The contingency matrix is easy to interpret for a small number of clusters, but becomes very hard to interpret for a large number of clusters.
It doesn’t give a single metric to use as an objective for clustering optimisation.
References
Wikipedia entry for contingency matrix
2.3.10.9. Pair Confusion Matrix

The pair confusion matrix (sklearn.metrics.cluster.pair_confusion_matrix) is a 2x2 similarity matrix

C = \begin{bmatrix} C_{00} & C_{01} \\ C_{10} & C_{11} \end{bmatrix}

between two clusterings computed by considering all pairs of samples and counting pairs that are assigned into the same or into different clusters under the true and predicted clusterings.

It has the following entries:

C_{00}: number of pairs with both clusterings having the samples not clustered together

C_{10}: number of pairs with the true label clustering having the samples clustered together but the other clustering not having the samples clustered together

C_{01}: number of pairs with the true label clustering not having the samples clustered together but the other clustering having the samples clustered together

C_{11}: number of pairs with both clusterings having the samples clustered together

Considering a pair of samples that is clustered together a positive pair, then as in binary classification the count of true negatives is C_{00}, false negatives is C_{10}, true positives is C_{11} and false positives is C_{01}.
Perfectly matching labelings have all non-zero entries on the diagonal regardless of actual label values:
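For example, with toy label lists (a minimal sketch; the exact array formatting may differ):

>>> from sklearn.metrics.cluster import pair_confusion_matrix
>>> pair_confusion_matrix([0, 0, 1, 1], [0, 0, 1, 1])
array([[8, 0],
       [0, 4]])
>>> pair_confusion_matrix([0, 0, 1, 1], [1, 1, 0, 0])
array([[8, 0],
       [0, 4]])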
Labelings that assign all classes members to the same clusters are complete but may not always be pure, hence penalized, and have
some off-diagonal non-zero entries:
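For example (a minimal sketch; the exact array formatting may differ):

>>> from sklearn.metrics.cluster import pair_confusion_matrix
>>> pair_confusion_matrix([0, 0, 1, 2], [0, 0, 1, 1])
array([[8, 2],
       [0, 2]])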
If classes members are completely split across different clusters, the assignment is totally incomplete, hence the matrix has all zero diago-
nal entries:
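For example (a minimal sketch; the exact array formatting may differ):

>>> from sklearn.metrics.cluster import pair_confusion_matrix
>>> pair_confusion_matrix([0, 0, 0, 0], [0, 1, 2, 3])
array([[ 0,  0],
       [12,  0]])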
References
L. Hubert and P. Arabie, Comparing Partitions, Journal of Classification 1985 (https://link.springer.com/article/10.1007%2FBF01908075)