10-601 Machine Learning (Fall 2010) Principal Component Analysis


Yang Xu
This note is partly based on Chapter 12.1 of Chris Bishop's book on PRML and the lecture slides on PCA written by Carlos Guestrin for the 10-701 Machine Learning (Fall 2009) course.

In a nutshell

Principal component analysis, or PCA, is in essence a linear projection operator that maps a variable of interest to a new coordinate frame where the axes represent maximal variability. Expressed mathematically, PCA transforms an input data matrix X (N × D, N being the number of points and D the dimension of the data) to an output Y (N × D′, often with D′ < D) via

Y = XP    (1)

where P (D × D′) is the projection matrix, of which each column is a principal component (PC): these are unit vectors that point in mutually orthogonal directions. PCA is a handy tool for dimension reduction, latent concept discovery, data visualization and compression, or data preprocessing in general.
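For concreteness, here is a minimal NumPy sketch of Equation 1. The data matrix and the projection matrix are placeholders (a real P would come from the eigendecomposition derived later in this note), so treat it as an illustration of the shapes involved rather than a full PCA implementation.

```python
import numpy as np

N, D, D_prime = 100, 5, 2                  # toy sizes: 100 points, 5 dims, keep 2
X = np.random.randn(N, D)                  # stand-in data matrix (N x D)

# Placeholder projection matrix with orthonormal columns (D x D');
# in real PCA these columns would be the principal components.
P, _ = np.linalg.qr(np.random.randn(D, D_prime))

Y = X @ P                                  # Equation 1: Y = XP, shape (N, D')
print(Y.shape)                             # (100, 2)
```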

Why do we care

One could think of many situations where transforming a data set to a low-dimensional space is desirable, e.g. it makes the data easier to manipulate and requires fewer computational resources. It is, however, important to perform such transformations in a principled way, because any kind of dimension reduction may lead to loss of information, and it is crucial that the algorithm preserves the useful part of the data while discarding the noise. Here we motivate PCA from three perspectives and explain why preserving maximal variability makes sense.

2.1 Correlation and redundancy

Suppose you did a survey on the height and weight of your classmates and learnt that these were roughly correlated, in that taller individuals tend to be heavier and vice versa. You decided to plot these on a graph where each individual is represented by the coordinates (height, weight), as illustrated in Figure 1. Immediately you would find that by tilting these two axes approximately 45 degrees you could capture most of the variability along a single axis. In fact, if the heights and weights were perfectly correlated (e.g. they form a line) you could discard one of the two tilted axes while still capturing the full distribution. In other words, if an algorithm finds a rotation angle of the axes such that maximal variability is preserved, it may help one figure out where the correlation lies and which axes to drop, removing redundancies in the data. PCA does exactly this, and more: it also tells you how much variability each rotated axis captures.

Figure 1: Height vs. weight.

2.2 Synergy

Correlations also imply synergies. When synergy exists among a mass of dimensions, dimension reduction becomes extremely efficient: one could represent the mass of dimensions with just a handful. For example, imagine 20 violinists performing in unison in an orchestra. Assuming only a few of them yawn or do something completely offbeat, the movement of the ensemble of 20 violinists could well be characterized as a synergic whole (i.e. a single dimension suffices to represent 20 dimensions). In other words, the synergic performing motions dominate the representation, whereas noise factors such as yawning can be largely ignored. As another example, when you grasp an object, the joint angles of your fingers tend to curl in synergy. Believe it or not, PCA can capture such synergies too, by finding the axis that explains most of the variance of the ensemble joint motions.

Figure 2: Visualizing low-dimensional data: scatter plots of the five data dimensions, the projection onto PC 1 and PC 2, and the variance explained per PC.

2.3 Visualization

It is often sensible to get a feel for the data before hammering any machine learning algorithms on them. Figure 2 demonstrates a 5-dimensional data set where it is difficult to figure out the underlying structure by looking merely at the scatter plot (note that plotting pairwise dimensions would help in visualizing the correlations). Applying PCA, however, allows one to discover that the embedded structure is a circle. Note that only the first 2 principal components contain significant variability, suggesting that 3 out of the 5 transformed dimensions could be discarded without much loss of information.
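As a hedged sketch of the kind of experiment behind Figure 2 (the exact data used in the figure is not given, so the construction below is our own toy stand-in), we can embed a 2-D circle into 5 dimensions with an arbitrary linear map plus noise, and check that the first two components carry nearly all the variance:

```python
import numpy as np

rng = np.random.default_rng(0)
t = rng.uniform(0, 2 * np.pi, 500)
circle = np.column_stack([np.cos(t), np.sin(t)])       # intrinsic 2-D structure

A = rng.standard_normal((2, 5))                        # arbitrary embedding into 5-D
X = circle @ A + 0.05 * rng.standard_normal((500, 5))  # add a little noise

Xc = X - X.mean(axis=0)                                # center the data
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
print(np.round(s**2 / np.sum(s**2), 3))                # variance fractions: first two dominate

Y = Xc @ Vt[:2].T                                      # projection onto PC 1 and PC 2
# Plotting Y reveals the embedded loop, as in Figure 2.
```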

A preemptive little exercise

Assuming that all PCA does is find a projection (or rotation) matrix such that maximal variances of the data are preserved along the rotated axes, what would you predict about the columns of the matrix P in Equation 1 if we were to apply PCA to the data in Figure 1? In other words, can you think of two perpendicular unit vectors that rotate the height-weight axes in such a way that maximal variances are captured (hint: 45 degrees might be a good rotation angle)?

Answer: P1 ≈ [1/√2, 1/√2]^T, P2 ≈ [1/√2, −1/√2]^T. This should be quite intuitive: if we assume the desirable tilting angle of the axes is roughly 45 degrees, the corresponding projections are then [1, 1] and [1, −1] (or [−1, 1]). To make these unit rotation vectors, we normalize them by √(1² + 1²) = √2, resulting in P1 and P2. Note that P1 is what we call the first principal component, which captures the most variability; P2 is the second principal component, which retains the residual maximal variability in a direction orthogonal to P1. If height and weight were perfectly correlated, the variance along the P2 direction would be zero. If you don't find this example intuitive, no worries; read on.
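To check this answer numerically, here is a small sketch with simulated, strongly correlated height-weight data (the numbers are made up); the two variables are standardized so that the 45-degree intuition applies:

```python
import numpy as np

rng = np.random.default_rng(1)
height = rng.normal(1.7, 0.1, 200)                  # toy heights (metres)
weight = 60 * height + rng.normal(0, 2.0, 200)      # toy weights, correlated with height

X = np.column_stack([height, weight])
Xs = (X - X.mean(axis=0)) / X.std(axis=0)           # standardize both axes

C = np.cov(Xs, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(C)                # eigenvalues in ascending order
p1, p2 = eigvecs[:, -1], eigvecs[:, -2]             # first and second PCs
print(np.round(p1, 2), np.round(p2, 2))             # roughly [0.71, 0.71] and [0.71, -0.71], up to sign
```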

How is it derived

There are a number of ways to derive PCA. Here we focus on the maximal-variance principle and show that the resulting optimization problem boils down to an eigendecomposition of the covariance matrix. Recall that the input data matrix of N points is X = [x1, ..., xN]^T, where each x is a D-dimensional vector. PCA finds a projection matrix P = [p1, ..., pD′] that maps each point to a low-dimensional space (D′ ≤ D). As described, each p is a basis vector that maximizes the variance of X, the directions are mutually orthogonal, and the amount of variance preserved decreases from p1 to pD′. Here we derive the first principal component p1, although any higher-order component can be derived similarly via induction. By definition, the covariance matrix of X is

C = (1/N) Σ_{n=1}^{N} (xn − μ)(xn − μ)^T    (2)

where μ = (1/N) Σ_{n=1}^{N} xn is the mean. The resulting variance from projecting the data onto p1 is simply

v = (1/N) Σ_{n=1}^{N} (p1^T xn − p1^T μ)² = p1^T C p1.    (3)

Note that v is a scalar. PCA seeks to find the p1 that maximizes this quantity under the constraint that p1 is a unit vector (i.e. the projection should be purely rotational, without any scaling):

max_{p1} F = p1^T C p1 + λ1 (1 − p1^T p1)    (4)

where the term associated with λ1 is a Lagrange multiplier that enforces the unit-vector constraint. Differentiating F with respect to p1 and setting the derivative to zero yields the condition for the optimal projection

dF/dp1 = 0  ⇒  C p1 = λ1 p1.    (5)

For those familiar with linear algebra, Equation 5 is exactly an eigendecomposition of the matrix C, where p1 is an eigenvector and λ1 the corresponding eigenvalue (i.e. solve det(C − λ1 I) = 0 for λ1 and substitute it into (C − λ1 I) p1 = 0 to obtain p1). (Exercise: perform an eigendecomposition on the simple matrix [2 3; 0 1].) Thus finding principal components is equivalent to solving an eigendecomposition problem for the covariance matrix C. This connection is intuitive because eigenvalues represent the magnitude of a matrix projected onto the corresponding eigenvectors; here the eigenvectors give the directions onto which the data covariance is projected, and the eigenvalues are the resulting variances of those projections. If we repeat this process we obtain p2, ..., pD′ and λ2, ..., λD′ (the maximal D′ is D, assuming the number of samples N is greater than the dimension D). Following the eigendecomposition, the covariance matrix C can be expressed as follows (assuming D′ = D, i.e. P is a full projection matrix):

C = P Λ P^T    (6)

where Λ is a diagonal matrix with elements {λ1, λ2, ..., λD} and λ1 ≥ λ2 ≥ ... ≥ λD. Here each column in P is a principal component and each corresponding λ indicates the variance explained by projecting the data onto that component.
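Putting Equations 2-6 together, here is a minimal sketch of PCA via the covariance eigendecomposition; the function name and the test data are our own choices, not part of the note:

```python
import numpy as np

def pca_eig(X):
    """PCA via eigendecomposition of the covariance matrix (Equations 2-6)."""
    mu = X.mean(axis=0)                          # the mean in Equation 2
    Xc = X - mu
    C = (Xc.T @ Xc) / X.shape[0]                 # covariance matrix C, Equation 2
    eigvals, eigvecs = np.linalg.eigh(C)         # C is symmetric, so eigh applies
    order = np.argsort(eigvals)[::-1]            # sort by decreasing variance
    return eigvals[order], eigvecs[:, order]     # the lambdas and the columns p_i of P

X = np.random.randn(50, 3)
lam, P = pca_eig(X)
print(lam)                                       # lambda_1 >= lambda_2 >= lambda_3
print(np.round(P.T @ P, 6))                      # ~identity: the PCs are orthonormal unit vectors
```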

Singular value decomposition


It turns out that PCA falls under a general method for matrix decomposition called singular value decomposition (SVD). The idea behind SVD is to factor an arbitrary matrix X (of size N × D) into

X = U Σ V^T    (7)

where U = [u1, ..., uD] and V = [v1, ..., vD] are orthonormal bases for the column and row spaces of X, and Σ is a diagonal matrix with diagonal elements {σ1, ..., σD}. Another way of explaining SVD is that we wish to find a mapping between the bases of the column space and those of the row space:

σi ui = X vi    (i = 1, ..., D)    (8)

where the σ's here can be understood as stretch factors that match the u's with the v's. Equation 8 can then be expressed in matrix form, which yields Equation 7:

U Σ = X V
X = U Σ V^{−1}
X = U Σ V^T    (9)

(the last step uses V^{−1} = V^T, since V is orthonormal).

The magic begins when we derive the covariance of X, assuming it is the input data matrix as in the case of PCA:

X^T X = (U Σ V^T)^T (U Σ V^T) = V Σ^T U^T U Σ V^T = V Σ² V^T.    (10)

Now compare Equation 10 with Equation 6: they are identical, except that λi = σi² (i = 1, ..., D; strictly, λi = σi²/N once the 1/N factor in Equation 2 is accounted for, and Equation 10 assumes X has been mean-centered). In other words, performing SVD is equivalent to PCA, where the eigenvectors or principal components are found in V. So what is the extra gain of SVD? Note that SVD simultaneously finds the eigenvectors of X^T X (in V) and of X X^T (in U; try to work this out yourself). In cases where we have more dimensions than data points, i.e. D ≫ N, it is often convenient to decompose X X^T (N × N) instead of the covariance matrix (D × D) to save computation.
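A quick numerical check of this correspondence on mean-centered toy data (our own example; note the 1/N factor from Equation 2):

```python
import numpy as np

X = np.random.randn(200, 4)
Xc = X - X.mean(axis=0)                        # the SVD route assumes mean-centered data
N = Xc.shape[0]

# Route 1: eigendecomposition of the covariance matrix (Equation 6).
lam, P = np.linalg.eigh(Xc.T @ Xc / N)
lam, P = lam[::-1], P[:, ::-1]                 # reorder to decreasing variance

# Route 2: SVD of the centered data matrix (Equation 7).
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

print(np.allclose(lam, s**2 / N))              # eigenvalues equal squared singular values / N
print(np.allclose(np.abs(P), np.abs(Vt.T)))    # same principal components, up to sign
```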

How is it used

The primary use of PCA is to reduce the dimension of data: given D-dimensional data x (D × 1), PCA maps it to y = P^T x with a lower dimension D′. This reduction is often achieved by truncating the columns of the projection matrix P based on the amount of variance that one desires to retain. To be concrete, remember that Λ contains in its diagonal the eigenvalues in decreasing order, and the fractional variance accounted for by, say, the first M λ's (λ1, ..., λM) is simply their sum relative to the sum of all λ's. Suppose we wish to retain 95% of the variance; then we would keep just enough columns of P that the fractional sum of their corresponding λ values is at least 0.95, and discard the remaining columns (note that the fractional sum of the remaining λ's is only 0.05, implying they might just be noise; at least that is the hope). Effectively, this means that we have set the λ's with extremely small values to zero in Λ; let's call the resulting diagonal matrix Λ′. The truncated covariance matrix C′ = P Λ′ P^T would then have a lower rank than the original C, hence C′ is a low-rank approximation of C. Truncation is quite an art in itself, although there are cases where the choice is obvious. If the data has an intrinsic low-dimensional representation, e.g. the circle example in Figure 2, then it is likely that only a few λ's take significantly large values. In the extreme case, however, imagine that the λ values decrease very slowly (e.g. a near-uniform distribution); then it becomes difficult to determine where to truncate, or whether one should truncate at all.
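A small sketch of the truncation rule just described: keep the smallest number of leading components whose fractional eigenvalue sum reaches the desired level. The helper name and the toy eigenvalues are illustrative assumptions:

```python
import numpy as np

def components_for_variance(eigvals, threshold=0.95):
    """Smallest M such that the first M eigenvalues explain at least `threshold` of the variance."""
    frac = np.cumsum(eigvals) / np.sum(eigvals)   # eigvals assumed sorted in decreasing order
    return int(np.searchsorted(frac, threshold) + 1)

lam = np.array([6.0, 3.0, 0.6, 0.3, 0.1])         # toy eigenvalues (variance per PC)
M = components_for_variance(lam, 0.95)
print(M)                                          # 3: the first three PCs explain 96% of the variance
```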

PCA is particularly useful if one has a limited amount of data (e.g. images). Once the data is transformed to a low-dimensional space, one could do whatever is applicable to the original data set, only now the data is easier to handle, e.g. regression, classification, etc. The limitation of a PCA-based regression is that it is usually hard to interpret the transformed dimensions (or covariates). For example, suppose we map the height-weight coordinates in Figure 1 to a one-dimensional space by tilting the axes 45 degrees; what would you interpret this new dimension as? It is difficult to associate it directly with an explicit physical meaning such as height or weight. In the next section, we discuss further the pitfall of PCA in classification.

Figure 3: PCA vs. LDA. Two classes of data points, with the projection directions found by PCA and LDA indicated by arrows.

When does it go wrong

Throughout we have promoted the idea that dimension reduction by maximizing variability is a sensible thing to do. Here we critique this notion and claim that it does not always work. Figure 3 shows a data set generated from two distinct classes, colored in blue and red. Now that you know PCA, you figure that projecting these points along a roughly 45-degree tilted direction (indicated by the arrow for PCA on the bottom right of the figure) would give you the first principal component that represents maximal variability. Note, however, that if our goal is to classify these points, this projection does more harm than good: the majority of blue and red points would land overlapping on the first principal component, hence PCA would confuse the classifier! What went wrong? The answer is simple: maximal variability does not imply maximal discriminability. Although the hope of retaining large variances is to preserve useful information and discard noise, it is not always compatible with the class configuration of the data. The ideal projection in this particular example is at an angle perpendicular to the first principal component (i.e. the arrow for LDA in Figure 3), which clearly captures less variability yet yields almost perfect classification. Another way to explain why PCA does not work well here is that it does not make use of the class labels; recall that PCA is a completely unsupervised algorithm that takes only the input X but not the class label vector (one could attempt to concatenate the labels to X, but it is unlikely to help; think about why). Here we briefly introduce a supervised algorithm called linear discriminant analysis (LDA) that serves both as a dimension reduction method and a classifier. So in case you get frustrated that your classifier doesn't work after PCA preprocessing, at least you have something to resort to. Like PCA, LDA also yields a linear projection of the data. Unlike PCA, LDA exploits the class labels of the data in determining its projection. Specifically, LDA finds a projection that maximizes the following ratio

R = (p^T Cb p) / (p^T Ci p)    (11)

where p is the projection vector (assuming a line projection for simplicity), Cb = Σ_c (μc − μ)(μc − μ)^T is the between-class variability, and Ci = Σ_c Σ_{nc} (x_{nc} − μc)(x_{nc} − μc)^T is the within-class variability (here c is the class label, μc is the mean of class c, μ is the mean of the entire data set, and x_{nc} refers to data point n in class c). Intuitively, this means that the projection seeks to maximize the difference across classes while minimizing the variability within each class. Note that this criterion is directly compatible with how well one could classify, say, two classes of data: if you project the data such that the two classes are far apart but close-knit within, it becomes easy to distinguish them. Relating to Figure 3, the discriminant projection is the direction along the arrow for LDA. Imagine collapsing all the data onto that projected line; you would obtain clearly separated, non-overlapping blue and red points, yielding excellent classification. Note, however, that the maximal number of dimensions obtainable via LDA projection is C − 1, where C is the total number of classes.
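For the two-class case, maximizing the ratio in Equation 11 has a well-known closed-form solution: p is proportional to Ci^{-1}(μ1 − μ2). Here is a minimal sketch on toy data shaped roughly like Figure 3; the data generation is our own assumption, not the data behind the figure:

```python
import numpy as np

rng = np.random.default_rng(2)
cov = np.array([[3.0, 2.5], [2.5, 3.0]])                 # elongated clouds, as in Figure 3
X1 = rng.multivariate_normal([0.0, 0.0], cov, 100)       # class 1
X2 = rng.multivariate_normal([2.0, -2.0], cov, 100)      # class 2, offset across the short axis

mu1, mu2 = X1.mean(axis=0), X2.mean(axis=0)
Ci = (X1 - mu1).T @ (X1 - mu1) + (X2 - mu2).T @ (X2 - mu2)   # within-class scatter C_i

p = np.linalg.solve(Ci, mu1 - mu2)                        # LDA direction: C_i^{-1} (mu1 - mu2)
p /= np.linalg.norm(p)
print(np.round(p, 3))   # points across the class gap, roughly orthogonal to the top PC
```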

Further reading

A pretty good reference is a tutorial written by Jonathon Shlens entitled "A Tutorial on Principal Component Analysis" that links PCA to singular value decomposition. Chapters 4.1.4-4.1.6 in Chris Bishop's book on PRML cover LDA in much more detail.
