@@ -210,7 +210,7 @@ the :class:`KMeans` algorithm.
 
 .. _affinity_propagation:
 
-Affinity propagation
+Affinity Propagation
 ====================
 
 :class:`AffinityPropagation` creates clusters by sending messages between
@@ -222,7 +222,7 @@ values from other pairs. This updating happens iteratively until convergence,
 at which point the final exemplars are chosen, and hence the final clustering
 is given.
 
-Affinity Propogation has a number of advantages over other algorithms. In many
+Affinity Propagation has a number of advantages over other algorithms. In many
 experiments it is shown to produce a lower error than other algorithms,
 specifically k-means, but it also works without any parameters, choosing the
 number of clusters based on the data provided.
@@ -253,12 +253,12 @@ availability of sample `k` to be the exemplar of sample `i` is given by:
 To begin with, all values for `r` and `a` are set to zero, and the calculation
 of each iterates until convergence.
 
-While effective, Affinity Propogation has some disadvantages. The most pressing
+While effective, Affinity Propagation has some disadvantages. The most pressing
 is its complexity. The algorithm has a time complexity of the order
 :math:`O(N^2 T)`, where `N` is the number of samples and `T` is the number of
 iterations until convergence. Further, the space complexity is of the order
-:math:`O(N^2)` if a dense similarity matrix is used, but reducable if a sparse
-similarity matrix is used. This makes Affinity Propogation most appropriate for
+:math:`O(N^2)` if a dense similarity matrix is used, but reducible if a sparse
+similarity matrix is used. This makes Affinity Propagation most appropriate for
 small to medium sized datasets.
 
 .. figure:: ../auto_examples/cluster/images/plot_affinity_propagation_1.png
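The documentation being patched here highlights that :class:`AffinityPropagation` picks the number of clusters from the data itself rather than taking it as a parameter. A minimal sketch of that behaviour with scikit-learn's estimator (the blob centers and sample counts below are illustrative choices, not taken from the patch; `random_state` assumes a reasonably recent scikit-learn):

```python
import numpy as np
from sklearn.cluster import AffinityPropagation

# Three well-separated blobs; note the model is never told "3".
rng = np.random.RandomState(0)
X = np.vstack([
    rng.randn(30, 2) + center
    for center in ([0, 0], [10, 10], [-10, 10])
])

af = AffinityPropagation(random_state=0).fit(X)

# The exemplars chosen at convergence define the clusters.
n_clusters = len(af.cluster_centers_indices_)
print(n_clusters)  # 3 well-separated blobs yield 3 exemplars here
```

Each sample's label (`af.labels_`) points to the exemplar that "claimed" it through the responsibility/availability message passing; since the full :math:`N \times N` similarity matrix is held in memory, this matches the :math:`O(N^2)` space cost the text describes.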