A number of areas I’m studying in my degree (not a maths degree) involve eigenvalues and eigenvectors, which have never been properly explained to me. I find it very difficult to understand the explanations given in textbooks and lectures. Does anyone know of a good, fairly simple but mathematical explanation of eigenvectors and eigenvalues on the internet? If not, could someone provide one here?

As well as some of the mathematical explanations, I’m also very interested in ‘big picture’ answers as to why on earth I should care about eigenvectors/eigenvalues, and what they actually ‘mean’.

**Answer**

To understand why you encounter eigenvalues/eigenvectors everywhere, you must first understand why you encounter **matrices** and **vectors** everywhere.

In a vast number of situations, the objects you study and the stuff you can do with them relate to vectors and linear transformations, which are represented as matrices.

So, in many many interesting situations, important relations are expressed as

$$\vec{y} = M\vec{x}$$

where $\vec{y}$ and $\vec{x}$ are vectors and $M$ is a matrix. This ranges from systems of linear equations you have to solve (which occur virtually everywhere in science and engineering) to more sophisticated engineering problems (finite element simulations). It is also the foundation for (a lot of) quantum mechanics, and it is further used to describe the typical geometric transformations you can do with vector graphics and 3D graphics in computer games.

Now, it is generally not straightforward to look at some matrix $M$ and immediately tell what it is going to do when you multiply it with some vector $\vec{x}$. Also, in the study of iterative algorithms you need to know something about higher powers of the matrix $M$, i.e. $M^k = M \cdot M \cdots M$ ($k$ times). This is a bit awkward and costly to compute in a naive fashion.

For a lot of matrices, you can find special vectors with a very simple relationship between the vector $\vec{x}$ itself and the vector $\vec{y} = M\vec{x}$. For example, if you look at the matrix $\begin{pmatrix}0 & 1\\ 1 & 0\end{pmatrix}$, you see that the vector $\begin{pmatrix}1\\ 1\end{pmatrix}$, when multiplied with the matrix, will just give you that vector again!

For such a vector, it is *very* easy to see what $M\vec{x}$ looks like, and even what $M^k\vec{x}$ looks like, since, obviously, repeated application won’t change it.
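You can check this example numerically; here is a minimal sketch using NumPy (the choice of library is mine, not part of the original answer):

```python
import numpy as np

# The swap matrix from the example above
M = np.array([[0, 1],
              [1, 0]])

# The special vector (1, 1)
x = np.array([1, 1])

# Multiplying by M just gives the vector back...
print(M @ x)  # [1 1]

# ...so repeated application never changes it either
print(np.linalg.matrix_power(M, 5) @ x)  # [1 1]
```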

This observation is generalized by the concept of eigenvectors. An eigenvector of a matrix $M$ is any vector $\vec{x}$ that only gets scaled (i.e. just multiplied by a number) when multiplied with $M$. Formally,

$$M\vec{x} = \lambda \vec{x}$$

for some number $\lambda$ (real or complex depending on the matrices you are looking at).
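In practice you rarely find eigenpairs by hand; a library routine such as NumPy’s `numpy.linalg.eig` computes them for you. A small sketch (the example matrix is again the swap matrix from above):

```python
import numpy as np

M = np.array([[0.0, 1.0],
              [1.0, 0.0]])

# eig returns the eigenvalues and a matrix whose *columns* are eigenvectors
eigenvalues, eigenvectors = np.linalg.eig(M)
print(eigenvalues)  # 1 and -1, the two eigenvalues of the swap matrix

# Verify the defining relation M x = lambda x for every eigenpair
for lam, v in zip(eigenvalues, eigenvectors.T):
    assert np.allclose(M @ v, lam * v)
```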

So, if your matrix M describes a system of some sort, the eigenvectors are those vectors that, when they go through the system, are changed in a very easy way. If M, for example, describes geometric operations, then M could, in principle, stretch and rotate your vectors. But eigenvectors only get stretched, not rotated.

The next important concept is that of an **eigenbasis**. By choosing a different basis for your vector space, you can alter the appearance of the matrix $M$ in that basis. Simply speaking, the $i$-th column of $M$ tells you what the $i$-th basis vector multiplied with $M$ would look like. If all your basis vectors are also eigenvectors, then it is not hard to see that the matrix $M$ is *diagonal*. Diagonal matrices are a welcome sight, because they are *really* easy to deal with: matrix-vector and matrix-matrix multiplication becomes very efficient, and computing the $k$-th power of a diagonal matrix is also trivial.
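The change to an eigenbasis is what diagonalization does: if $P$ has the eigenvectors as columns and $D$ the eigenvalues on its diagonal, then $M = P D P^{-1}$ and hence $M^k = P D^k P^{-1}$, where $D^k$ just raises each diagonal entry to the $k$-th power. A sketch of this, with an example matrix of my own choosing:

```python
import numpy as np

M = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigenvalues, P = np.linalg.eig(M)  # columns of P are eigenvectors
D = np.diag(eigenvalues)

# In the eigenbasis, M is diagonal: M = P D P^-1
assert np.allclose(M, P @ D @ np.linalg.inv(P))

# The k-th power is then trivial: M^k = P D^k P^-1,
# since D^k just means raising each diagonal entry to the k-th power
k = 10
Mk = P @ np.diag(eigenvalues ** k) @ np.linalg.inv(P)
assert np.allclose(Mk, np.linalg.matrix_power(M, k))
```

(This works for any diagonalizable matrix; not every matrix has a full eigenbasis.)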

I think for a “broad” introduction this might suffice?

**Attribution**
*Source: Link, Question Author: robintw, Answer Author: Lagerbaer*