
Hierarchical Clustering

Class: Algorithmic Methods of Data Mining
Program: M.Sc. Data Science
University: Sapienza University of Rome
Semester: Fall 2015
Lecturer: Carlos Castillo http://chato.cl/

Sources:
● Mohammed J. Zaki, Wagner Meira, Jr., Data Mining and Analysis: Fundamental Concepts and Algorithms, Cambridge University Press, May 2014. Chapter 14. [download]
● Evimaria Terzi: Data Mining course at Boston University. http://www.cs.bu.edu/~evimaria/cs565-13.html

[Figures: phylogenetic tree examples, from http://www.talkorigins.org/faqs/comdesc/phylo.html and http://www.chegg.com/homework-help/questions-and-answers/part-phylogenetic-tree-shown-figure-b-sines-10-12-13-first-insert-genomes-artiodactyls-phy-q4932026]

Hierarchical Clustering
• Produces a set of nested clusters organized as a hierarchical tree
• Can be visualized as a dendrogram
– A tree-like diagram that records the sequences of merges or splits

Strengths of Hierarchical Clustering
• No assumptions on the number of clusters
– Any desired number of clusters can be obtained by ‘cutting’ the dendrogram at the proper level
• Hierarchical clusterings may correspond to meaningful taxonomies
– Examples: biological sciences (e.g., phylogeny reconstruction), the web (e.g., product catalogs), etc.

Hierarchical Clustering Algorithms
• Two main types of hierarchical clustering
– Agglomerative:
• Start with the points as individual clusters
• At each step, merge the closest pair of clusters, until only one cluster (or k clusters) is left
– Divisive:
• Start with one, all-inclusive cluster
• At each step, split a cluster, until each cluster contains a single point (or there are k clusters)
• Traditional hierarchical algorithms use a similarity or distance matrix
– Merge or split one cluster at a time

Complexity of hierarchical clustering
• The distance matrix is used to decide which clusters to merge/split
• At least quadratic in the number of data points
• Not usable for large datasets

Agglomerative clustering algorithm
• Most popular hierarchical clustering technique
• Basic algorithm:
  Compute the distance matrix between the input data points
  Let each data point be a cluster
  Repeat
    Merge the two closest clusters
    Update the distance matrix
  Until only a single cluster remains
• Key operation is the computation of the distance between two clusters (a minimal code sketch of the full procedure follows below)
– Different definitions of the distance between clusters lead to different algorithms

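As an illustration (not part of the original slides), a minimal sketch of this basic algorithm in Python/NumPy, assuming Euclidean distances and single- or complete-link cluster distances:

```python
import numpy as np

def agglomerative(points, k=1, linkage="single"):
    """Naive agglomerative clustering following the basic algorithm above.
    points: (n, d) array; returns clusters as lists of point indices."""
    points = np.asarray(points, dtype=float)
    n = len(points)
    # Distance matrix between the input data points
    D = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    clusters = [[i] for i in range(n)]           # each data point is a cluster
    link = min if linkage == "single" else max   # single-link vs. complete-link
    while len(clusters) > k:                     # until only k clusters remain
        # Find the two closest clusters (cluster distances are re-derived from D,
        # which stands in for the "update the distance matrix" step)
        best = (np.inf, None, None)
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                d = link(D[i, j] for i in clusters[a] for j in clusters[b])
                if d < best[0]:
                    best = (d, a, b)
        _, a, b = best
        clusters[a] = clusters[a] + clusters[b]  # merge the two closest clusters
        del clusters[b]
    return clusters

print(agglomerative([[0], [1], [2], [10], [11], [12]], k=2))
# -> [[0, 1, 2], [3, 4, 5]]
```
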
Input / Initial setting
• Start with clusters of individual points and a distance/proximity matrix
[Figure: distance/proximity matrix with one row and one column per point p1, p2, p3, p4, p5, ...]

Intermediate State
• After some merging steps, we have some clusters
[Figure: clusters C1–C5 and the corresponding distance/proximity matrix]

Intermediate State
• Merge the two closest clusters (C2 and C5) and update the distance matrix.
[Figure: clusters C1–C5, with C2 and C5 about to be merged, and the corresponding distance/proximity matrix]

After Merging
• “How do we update the distance matrix?”
[Figure: updated matrix after merging C2 and C5; the distances from C2 ∪ C5 to C1, C3, and C4 are still unknown (marked “?”)]

Distance between two clusters
• Each cluster is a set of points
• How do we define the distance between two sets of points?
– Lots of alternatives
– Not an easy task

Distance between two clusters
• Single-link distance between clusters Ci and Cj is the minimum distance between any object in Ci and any object in Cj
• The distance is defined by the two most similar objects
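
A minimal illustrative sketch (not from the slides) of this definition, assuming Euclidean distance:

```python
import numpy as np

def single_link(Ci, Cj):
    """Single-link (MIN) distance: smallest pairwise distance across the two clusters."""
    Ci, Cj = np.asarray(Ci, dtype=float), np.asarray(Cj, dtype=float)
    return min(float(np.linalg.norm(x - y)) for x in Ci for y in Cj)
```
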
Single-link clustering: example
• Determined by one pair of points, i.e., by one link in the proximity graph.
[Figure: example points and their proximity graph]

Single-link clustering: example
[Figure: nested clusters (left) and the corresponding dendrogram (right) for single-link clustering of points 1–6]


Exercise: 1-dimensional clustering

Data: 5 11 13 16 25 36 38 39 42 60 62 64 67

Exercise:
Create a hierarchical agglomerative clustering for this data.
To make this deterministic, if there are ties, pick the left-most link.

Verify: the clustering with 4 clusters has 25 as a singleton.

http://chato.cl/2015/data-analysis/exercise-answers/hierarchical-clustering_exercise_01_answer.txt
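
Assuming single linkage (the criterion used in the preceding examples), the result can be checked with SciPy; this is an illustrative sketch, not the official answer linked above:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

data = np.array([5, 11, 13, 16, 25, 36, 38, 39, 42, 60, 62, 64, 67], dtype=float)
Z = linkage(data.reshape(-1, 1), method='single')  # SciPy expects an (n, d) array
labels = fcluster(Z, t=4, criterion='maxclust')    # cut the dendrogram into 4 clusters
for c in sorted(set(labels)):
    print(c, data[labels == c])
# 25 should come out as a singleton cluster
```
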
Strengths of single-link clustering
[Figure: original points (left) and the two clusters found (right)]
• Can handle non-elliptical shapes

Limitations of single-link clustering
[Figure: original points (left) and the two clusters found (right)]
• Sensitive to noise and outliers
• Produces long, elongated clusters (the “chaining” effect)

Distance between two clusters
• Complete-link distance between clusters Ci and Cj is the maximum distance between any object in Ci and any object in Cj
• The distance is defined by the two most dissimilar objects
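
The corresponding sketch for the complete-link definition (illustrative, same assumptions as above):

```python
import numpy as np

def complete_link(Ci, Cj):
    """Complete-link (MAX) distance: largest pairwise distance across the two clusters."""
    Ci, Cj = np.asarray(Ci, dtype=float), np.asarray(Cj, dtype=float)
    return max(float(np.linalg.norm(x - y)) for x in Ci for y in Cj)
```
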
Complete-link clustering: example
• Distance between clusters is determined by the two most distant points in the different clusters
[Figure: example points and their proximity graph]

Complete-link clustering: example
[Figure: nested clusters (left) and the corresponding dendrogram (right) for complete-link clustering of points 1–6]


Strengths of complete-link clustering
[Figure: original points (left) and the two clusters found (right)]
• More balanced clusters (with equal diameter)
• Less susceptible to noise

Limitations of complete-link clustering
[Figure: original points (left) and the two clusters found (right)]
• Tends to break large clusters
• All clusters tend to have the same diameter – small clusters are merged with larger ones

Distance between two clusters
• Group average distance between clusters Ci and Cj is the average distance over all pairs of objects, one from Ci and one from Cj
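
Again as an illustrative sketch, the group-average distance is the mean, rather than the minimum or maximum, of the pairwise distances:

```python
import numpy as np

def group_average(Ci, Cj):
    """Group-average distance: mean of all pairwise distances across the two clusters."""
    Ci, Cj = np.asarray(Ci, dtype=float), np.asarray(Cj, dtype=float)
    return float(np.mean([np.linalg.norm(x - y) for x in Ci for y in Cj]))
```
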
Average-link clustering: example
• Proximity of two clusters is the average of pairwise proximities between points in the two clusters.
[Figure: example points and their proximity graph]

Average-link clustering: example
[Figure: nested clusters (left) and the corresponding dendrogram (right) for average-link clustering of points 1–6]



Average-link clustering:
discussion
• Compromise between Single and
Complete Link

• Strengths
– Less susceptible to noise and outliers

• Limitations
– Biased towards globular clusters
Distance between two clusters
• Centroid distance between clusters Ci and Cj is the distance between the centroid ri of Ci and the centroid rj of Cj
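
An illustrative sketch of the centroid distance, taking the centroids ri and rj to be the clusters’ mean points:

```python
import numpy as np

def centroid_distance(Ci, Cj):
    """Centroid distance: distance between the mean points r_i and r_j of the two clusters."""
    Ci, Cj = np.asarray(Ci, dtype=float), np.asarray(Cj, dtype=float)
    return float(np.linalg.norm(Ci.mean(axis=0) - Cj.mean(axis=0)))
```
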
Distance between two clusters
• Ward’s distance between clusters Ci and Cj is the difference between the total within-cluster sum of squares for the two clusters separately, and the within-cluster sum of squares resulting from merging the two clusters into a cluster Cij
• ri: centroid of Ci
• rj: centroid of Cj
• rij: centroid of Cij
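
The formula on the original slide did not survive the export; with the notation above, the standard definition is D_Ward(Ci, Cj) = Σ_{x∈Cij} ‖x − rij‖² − Σ_{x∈Ci} ‖x − ri‖² − Σ_{x∈Cj} ‖x − rj‖². A direct, illustrative sketch:

```python
import numpy as np

def sse(C, r):
    """Within-cluster sum of squared distances to the centroid r."""
    return float(((np.asarray(C, dtype=float) - r) ** 2).sum())

def ward_distance(Ci, Cj):
    """Ward's distance: increase in total within-cluster sum of squares when Ci and Cj are merged."""
    Ci, Cj = np.asarray(Ci, dtype=float), np.asarray(Cj, dtype=float)
    Cij = np.vstack([Ci, Cj])
    ri, rj, rij = Ci.mean(axis=0), Cj.mean(axis=0), Cij.mean(axis=0)
    return sse(Cij, rij) - sse(Ci, ri) - sse(Cj, rj)
```
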
Ward’s distance for clusters
• Similar to group average and centroid distance
• Less susceptible to noise and outliers
• Biased towards globular clusters
• Hierarchical analogue of k-means
– Can be used to initialize k-means

Hierarchical Clustering: Comparison
[Figure: clusterings of the same six points produced by MIN (single link), MAX (complete link), Group Average, and Ward’s Method]

Hierarchical Clustering: Time and Space Requirements
• For a dataset X consisting of n points
• O(n²) space: it requires storing the distance matrix
• O(n³) time in most cases
– There are n steps, and at each step the size-n² distance matrix must be updated and searched
– Complexity can be reduced to O(n² log n) time for some approaches by using appropriate data structures
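
In practice, optimized library implementations are used instead of the naive loops sketched above. An illustrative example (not from the slides) using SciPy, which supports all of the linkage criteria discussed here:

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import linkage, dendrogram, fcluster

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(6, 1, (20, 2))])

# method can be 'single', 'complete', 'average', 'centroid', or 'ward'
Z = linkage(X, method='ward')
labels = fcluster(Z, t=2, criterion='maxclust')  # cut the tree into 2 clusters

dendrogram(Z)  # visualize the hierarchy
plt.show()
```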
