What is the intuitive relationship between SVD and PCA?

Singular value decomposition (SVD) and principal component analysis (PCA) are two eigenvalue methods used to reduce a high-dimensional data set into fewer dimensions while retaining important information. Online articles say that these methods are ‘related’ but never specify the exact relation.

What is the intuitive relationship between PCA and SVD? As PCA uses the SVD in its calculation, clearly there is some ‘extra’ analysis done. What does PCA ‘pay attention’ to differently than the SVD? What kinds of relationships do each method utilize more in their calculations? Is one method ‘blind’ to a certain type of data that the other is not?

Answer

(I assume for the purposes of this answer that the data has been preprocessed to have zero mean.)

Simply put, the PCA viewpoint requires that one compute the eigenvalues and eigenvectors of the covariance matrix, which is the product $\frac{1}{n-1}\mathbf{X}\mathbf{X}^\top$, where $\mathbf{X}$ is the data matrix (with the $n$ observations as columns, centered as assumed above). Since the covariance matrix is symmetric, it is diagonalizable, and the eigenvectors can be normalized such that they are orthonormal:

$$\frac{1}{n-1}\mathbf{X}\mathbf{X}^\top = \frac{1}{n-1}\mathbf{W}\mathbf{D}\mathbf{W}^\top,$$

where the columns of $\mathbf{W}$ are the orthonormal eigenvectors of $\mathbf{X}\mathbf{X}^\top$ (the principal directions) and $\mathbf{D}$ is the diagonal matrix of the corresponding eigenvalues.
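
As a concrete illustration, here is a minimal NumPy sketch of this eigendecomposition route (the random data, the 5-variable shape, and the variable names are my own, purely for demonstration):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 100))      # 5 variables, n = 100 observations as columns
X = X - X.mean(axis=1, keepdims=True)  # center each variable (zero mean, as assumed)

n = X.shape[1]
D, W = np.linalg.eigh(X @ X.T)         # X X^T = W D W^T (eigh is for symmetric input)
order = np.argsort(D)[::-1]            # eigh returns ascending order; flip to descending
D, W = D[order], W[:, order]           # columns of W are the principal directions

cov_eigenvalues = D / (n - 1)          # eigenvalues of the covariance (1/(n-1)) X X^T
```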

On the other hand, applying the SVD to the data matrix $\mathbf{X}$ as follows:

$$\mathbf{X} = \mathbf{U}\mathbf{\Sigma}\mathbf{V}^\top$$

and attempting to construct the covariance matrix from this decomposition gives
$$\frac{1}{n-1}\mathbf{X}\mathbf{X}^\top = \frac{1}{n-1}(\mathbf{U}\mathbf{\Sigma}\mathbf{V}^\top)(\mathbf{U}\mathbf{\Sigma}\mathbf{V}^\top)^\top = \frac{1}{n-1}(\mathbf{U}\mathbf{\Sigma}\mathbf{V}^\top)(\mathbf{V}\mathbf{\Sigma}\mathbf{U}^\top)$$

and since $\mathbf{V}$ is an orthogonal matrix ($\mathbf{V}^\top\mathbf{V} = \mathbf{I}$),

$$\frac{1}{n-1}\mathbf{X}\mathbf{X}^\top = \frac{1}{n-1}\mathbf{U}\mathbf{\Sigma}^2\mathbf{U}^\top$$

and the correspondence is easily seen: the left singular vectors $\mathbf{U}$ play the role of the eigenvector matrix $\mathbf{W}$, and $\mathbf{\Sigma}^2$ plays the role of $\mathbf{D}$, so the square roots of the eigenvalues of $\mathbf{X}\mathbf{X}^\top$ are the singular values of $\mathbf{X}$.
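
Continuing the sketch above, this correspondence can be checked numerically (the sign-insensitive comparison of the columns is needed because eigenvectors and singular vectors are each determined only up to sign):

```python
U, s, Vt = np.linalg.svd(X, full_matrices=False)  # X = U Sigma V^T

# Sigma^2 plays the role of D: the squared singular values of X
# equal the eigenvalues of X X^T.
print(np.allclose(s**2, D))                  # True

# U plays the role of W, up to the sign of each column.
print(np.allclose(np.abs(U), np.abs(W)))     # True
```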

In fact, using the SVD to perform PCA makes much better sense numerically than forming the covariance matrix to begin with, since the formation of $\mathbf{X}\mathbf{X}^\top$ can cause loss of precision. This is detailed in books on numerical linear algebra, but I'll leave you with an example of a matrix that can be stably SVD'd, but for which forming the product $\mathbf{X}\mathbf{X}^\top$ can be disastrous: the Läuchli matrix,

$$\begin{pmatrix} 1 & 1 & 1 \\ \epsilon & 0 & 0 \\ 0 & \epsilon & 0 \\ 0 & 0 & \epsilon \end{pmatrix},$$

where $\epsilon$ is a tiny number (small enough that $\epsilon^2$ falls below machine precision).
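
To see the failure concretely, here is a small float64 demonstration; the choice $\epsilon = 10^{-8}$ is mine, picked so that $\epsilon^2 = 10^{-16}$ falls just below the double-precision machine epsilon (about $2.2 \times 10^{-16}$):

```python
import numpy as np

eps = 1e-8                          # eps**2 = 1e-16, below machine epsilon
X = np.array([[1.0, 1.0, 1.0],
              [eps, 0.0, 0.0],
              [0.0, eps, 0.0],
              [0.0, 0.0, eps]])

# SVD of X directly recovers the true singular values
# sqrt(3 + eps**2), eps, eps.
print(np.linalg.svd(X, compute_uv=False))    # approx [1.732e+00 1.e-08 1.e-08]

# Forming the cross product rounds 1 + eps**2 to exactly 1, so the computed
# matrix is singular and the two small singular values are lost entirely.
G = X.T @ X                                  # X @ X.T loses them similarly
sq = np.maximum(np.linalg.eigvalsh(G), 0.0)  # clamp rounding-induced negatives
print(np.sqrt(sq))                           # [0. 0. 1.732e+00] (ascending)
```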

Attribution
Source: Link, Question Author: wickedchicken, Answer Author: amWhy
