# Why is the eigenvector of a covariance matrix equal to a principal component?

If I have the covariance matrix of a data set and I multiply it by one of its eigenvectors, say the eigenvector with the largest eigenvalue, the result is a scaled version of that same eigenvector.

What does this really tell me? Why is this the principal component? What property makes it a principal component? Geometrically, I understand that the principal component (eigenvector) will be sloped at the general slope of the data (loosely speaking). Again, can someone help me understand why this happens?

Long answer: Let’s say you want to reduce the dimensionality of your data set, say down to just one dimension. In general, this means picking a unit vector $u$, and replacing each data point, $x_i$, with its projection along this vector, $u^T x_i$. Of course, you should choose $u$ so that you retain as much of the variation of the data points as possible: if your data points lay along a line and you picked $u$ orthogonal to that line, all the data points would project onto the same value, and you would lose almost all the information in the data set! So you would like to maximize the variance of the new data values $u^T x_i$. It’s not hard to show that if the covariance matrix of the original data points $x_i$ was $\Sigma$, the variance of the new data points is just $u^T \Sigma u$. As $\Sigma$ is symmetric, the unit vector $u$ which maximizes $u^T \Sigma u$ is nothing but the eigenvector with the largest eigenvalue.
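You can check this numerically. The sketch below (using NumPy, with made-up 2-D data for illustration) confirms that the variance of the projections $u^T x_i$ equals $u^T \Sigma u$, and that no other unit direction beats the top eigenvector:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical data set, stretched so one direction carries most of the variance.
X = rng.normal(size=(500, 2)) @ np.array([[3.0, 0.0], [0.0, 0.5]])
X = X - X.mean(axis=0)  # center the data

Sigma = np.cov(X, rowvar=False)           # sample covariance matrix
eigvals, eigvecs = np.linalg.eigh(Sigma)  # eigh: Sigma is symmetric
u = eigvecs[:, np.argmax(eigvals)]        # eigenvector with largest eigenvalue

# The variance of the projected values u^T x_i is exactly u^T Sigma u ...
proj_var = np.var(X @ u, ddof=1)
print(np.isclose(proj_var, u @ Sigma @ u))  # True

# ... and no other unit vector gives a larger projected variance.
for theta in np.linspace(0.0, np.pi, 200):
    v = np.array([np.cos(theta), np.sin(theta)])
    assert v @ Sigma @ v <= u @ Sigma @ u + 1e-12
```

Sweeping `v` over the unit circle is only feasible here because the example is 2-D; the eigenvalue argument is what makes the claim hold in any dimension.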
If you want to retain more than one dimension of your data set, in principle what you can do is first find the largest principal component, call it $u_1$, then subtract each point’s component along $u_1$ to get a “flattened” data set that has no variance along $u_1$. Find the principal component of this flattened data set, call it $u_2$. If you stopped here, $u_1$ and $u_2$ would be a basis of the two-dimensional subspace which retains the most variance of the original data; or, you can repeat the process and get as many dimensions as you want. As it turns out, all the vectors $u_1, u_2, \ldots$ you get from this process are just the eigenvectors of $\Sigma$ in decreasing order of eigenvalue. That’s why these are the principal components of the data set.