Singular value decomposition (SVD) and principal component analysis (PCA) are two eigenvalue methods used to reduce a high-dimensional data set into fewer dimensions while retaining important information. Online articles say that these methods are ‘related’ but never specify the exact relation.
What is the intuitive relationship between PCA and SVD? As PCA uses the SVD in its calculation, clearly there is some ‘extra’ analysis done. What does PCA ‘pay attention’ to differently than the SVD? What kinds of relationships does each method utilize more in its calculations? Is one method ‘blind’ to a certain type of data that the other is not?
(I assume for the purposes of this answer that the data has been preprocessed to have zero mean.)
Simply put, the PCA viewpoint requires that one compute the eigenvalues and eigenvectors of the covariance matrix, which is the product $\frac{1}{n-1}XX^\top$, where $X$ is the data matrix. Since the covariance matrix is symmetric, the matrix is diagonalizable, and the eigenvectors can be normalized such that they are orthonormal:

$$\frac{1}{n-1}XX^\top = WDW^\top,$$

where the columns of $W$ are the (orthonormal) eigenvectors and $D$ is the diagonal matrix of eigenvalues.
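As a concrete illustration of this eigendecomposition route, here is a small NumPy sketch (the data shape, sizes, and random seed are my own assumptions, not from the question):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 100))        # assumed layout: 5 variables, 100 observations
X = X - X.mean(axis=1, keepdims=True)    # preprocess to zero mean, as stated above

n = X.shape[1]
C = (X @ X.T) / (n - 1)                  # covariance matrix (1/(n-1)) X X^T

# eigh handles symmetric matrices and returns orthonormal eigenvectors
eigvals, W = np.linalg.eigh(C)
order = np.argsort(eigvals)[::-1]        # sort eigenpairs in descending order
eigvals, W = eigvals[order], W[:, order]

# the columns of W are the principal directions; W^T W = I
print(np.allclose(W.T @ W, np.eye(5)))
```

The columns of `W` are the principal directions, and the eigenvalues are the variances along them.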
On the other hand, applying SVD to the data matrix $X$ as follows:

$$X = U\Sigma V^\top$$
and attempting to construct the covariance matrix from this decomposition gives

$$\frac{1}{n-1}XX^\top = \frac{1}{n-1}(U\Sigma V^\top)(U\Sigma V^\top)^\top = \frac{1}{n-1}U\Sigma V^\top V\Sigma U^\top,$$
and since $V$ is an orthogonal matrix ($V^\top V = I$),

$$\frac{1}{n-1}XX^\top = \frac{1}{n-1}U\Sigma^2 U^\top,$$
and the correspondence is easily seen (the square roots of the eigenvalues of $XX^\top$ are the singular values of $X$, etc.)
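The correspondence can be checked numerically; the sketch below (with assumed sizes and seed) verifies that the eigenvalues of the covariance matrix equal $\sigma_i^2/(n-1)$ for the singular values $\sigma_i$ of $X$:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((4, 50))
X = X - X.mean(axis=1, keepdims=True)    # zero-mean data, as assumed throughout
n = X.shape[1]

# PCA route: eigenvalues of the covariance matrix, descending
eigvals = np.linalg.eigvalsh((X @ X.T) / (n - 1))[::-1]

# SVD route: singular values of X itself (returned in descending order)
s = np.linalg.svd(X, compute_uv=False)

# eigenvalues of (1/(n-1)) X X^T are sigma^2 / (n-1)
print(np.allclose(eigvals, s**2 / (n - 1)))
```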
In fact, using the SVD to perform PCA makes much better sense numerically than forming the covariance matrix to begin with, since the formation of $XX^\top$ can cause loss of precision. This is detailed in books on numerical linear algebra, but I’ll leave you with an example of a matrix that can be SVD’d stably, but for which forming $XX^\top$ can be disastrous, the Läuchli matrix:

$$X^\top = \begin{pmatrix} 1 & 1 & 1 \\ \epsilon & 0 & 0 \\ 0 & \epsilon & 0 \\ 0 & 0 & \epsilon \end{pmatrix},$$

where $\epsilon$ is a tiny number.
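To see the disaster concretely: with $\epsilon = 10^{-10}$, the term $\epsilon^2 = 10^{-20}$ falls below double-precision machine epsilon, so it vanishes entirely when $XX^\top$ is formed, while the SVD of $X$ itself still recovers the small singular values. A minimal sketch (the specific $\epsilon$ is my choice):

```python
import numpy as np

eps = 1e-10                                     # eps**2 = 1e-20 < machine epsilon (~2.2e-16)
X = np.vstack([np.ones(3), eps * np.eye(3)]).T  # 3x4 Läuchli matrix (transposed form above)

# Forming the Gram matrix squares the condition number: 1 + eps**2 rounds
# to exactly 1.0, so X X^T becomes the rank-1 all-ones matrix.
G = X @ X.T
eig = np.linalg.eigvalsh(G)                     # smallest eigenvalues come out ~0

# The SVD of X itself still recovers the small singular values ~eps.
s = np.linalg.svd(X, compute_uv=False)
print(eig.min(), s.min())                       # roundoff-level garbage vs. ~1e-10
```

The square root of the smallest eigenvalue of `G` bears no resemblance to `eps`, whereas the smallest singular value of `X` matches it to full precision.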