Wikipedia defines an eigenvector like this:

> An eigenvector of a square matrix is a non-zero vector that, when multiplied by the matrix, yields a vector that differs from the original vector at most by a multiplicative scalar.

So basically, in layman's terms: an eigenvector is a vector that, when you multiply it by a square matrix, gives you back the same vector, or the same vector multiplied by a scalar.

There are a lot of related terms (eigenspaces, eigenvalues, eigenbases, and so on) that I don't quite understand; in fact, I don't understand them at all.

Can someone give an explanation connecting these terms, so that it is clear what they are and why they are related?

**Answer**

Eigenvectors are those vectors that exhibit especially simple behaviour under a linear transformation: loosely speaking, they don't bend or rotate, they simply grow (or shrink) in length (though a different interpretation of growth/shrinkage may apply if the ground field is not $\mathbb{R}$). If it is possible to express every other vector as a linear combination of eigenvectors (ideally, if you can in fact find a whole basis made of eigenvectors), then applying the otherwise complicated linear transformation suddenly becomes easy, because with respect to a basis of eigenvectors the transformation is given simply by a diagonal matrix.
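As a concrete illustration (not part of the original answer; it assumes NumPy is available, and the particular matrix is an arbitrary choice): a small symmetric matrix whose eigenvectors form a basis, so that with respect to that eigenbasis the matrix acts as a diagonal matrix $D$, i.e. $A = P D P^{-1}$ where the columns of $P$ are eigenvectors.

```python
import numpy as np

# Illustrative 2x2 symmetric matrix; its eigenvectors span R^2.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigenvalues, P = np.linalg.eig(A)   # columns of P are eigenvectors
D = np.diag(eigenvalues)            # the "easy" diagonal form

# Changing to the eigenbasis and back recovers A exactly:
reconstructed = P @ D @ np.linalg.inv(P)
print(np.allclose(A, reconstructed))  # True
```

Here the hard-to-read action of $A$ is reduced to independent scalings along each eigenvector direction.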

Especially when one wants to investigate higher powers of a linear transformation, this is practically only possible for eigenvectors: if $Av = \lambda v$, then $A^n v = \lambda^n v$, and even exponentials become easy for eigenvectors: $\exp(A)v := \sum_{n=0}^{\infty} \frac{1}{n!} A^n v = e^{\lambda} v$.

By the way, the exponential functions $x \mapsto e^{cx}$ are eigenvectors of a famous linear transformation: differentiation, i.e. the map sending a function $f$ to its derivative $f'$. That is precisely why exponentials play an important role as base solutions of linear differential equations (or even their discrete counterpart, linear recurrences such as the Fibonacci numbers).

All other terminology is based on this notion:

A (nonzero) **eigenvector** is a vector $v$ such that $Av$ is a multiple of $v$; it determines its **eigenvalue** $\lambda$ as the scalar factor for which $Av = \lambda v$.

Given an eigenvalue $\lambda$, the set of eigenvectors with that eigenvalue (together with the zero vector) is in fact a subspace (i.e. sums and scalar multiples of eigenvectors with the same(!) eigenvalue are again eigenvectors), called the **eigenspace** for $\lambda$.
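The closure property can be seen concretely (a hypothetical NumPy example, not from the original answer): take a diagonal matrix with the eigenvalue $2$ repeated, so its eigenspace for $2$ is two-dimensional.

```python
import numpy as np

# Eigenvalue 2 appears twice, so its eigenspace is the xy-plane.
A = np.diag([2.0, 2.0, 5.0])
u = np.array([1.0, 0.0, 0.0])   # eigenvector for eigenvalue 2
w = np.array([0.0, 1.0, 0.0])   # another eigenvector for eigenvalue 2

# Any linear combination stays inside the eigenspace for 2:
s = 3.0 * u + 4.0 * w
print(np.allclose(A @ s, 2.0 * s))  # True

# But a sum of eigenvectors for *different* eigenvalues is not eigen:
t = u + np.array([0.0, 0.0, 1.0])   # eigenvector for 2 plus one for 5
print(np.allclose(A @ t, 2.0 * t))  # False
print(np.allclose(A @ t, 5.0 * t))  # False
```

This is why the "same(!) eigenvalue" caveat matters: mixing eigenvalues leaves the eigenspace.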

If we find a basis consisting of eigenvectors, then we may obviously call it an **eigenbasis**. If the vectors of our vector space are not mere number tuples (as in $\mathbb{R}^3$) but are, for example, functions, and our linear transformation is an operator (such as differentiation), it is often convenient to call the eigenvectors **eigenfunctions** instead; for example, $x \mapsto e^{3x}$ is an eigenfunction of the differentiation operator with eigenvalue $3$ (because its derivative is $x \mapsto 3e^{3x}$).
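The eigenfunction example can be checked symbolically; this sketch assumes SymPy is available and is only an illustration of the claim above:

```python
import sympy as sp

# x -> e^{3x} should be an eigenfunction of d/dx with eigenvalue 3.
x = sp.symbols('x')
f = sp.exp(3 * x)

# Applying the operator and dividing by f recovers the eigenvalue:
ratio = sp.simplify(sp.diff(f, x) / f)
print(ratio)  # 3
```
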

**Attribution**
*Source: Link, Question Author: Eigeneverything, Answer Author: Hagen von Eitzen*