I am trying to understand how – exactly – I go about projecting a vector onto a subspace.

Now, I know enough about linear algebra to know about projections, dot products, spans, etc., so I am not sure if I am reading too much into this or if this is something I have missed.

For a class I am taking, the professor says that we take a vector and ‘simply project it onto a subspace’ (where that subspace is formed from a set of orthogonal basis vectors).

Now, I know that a subspace is really, at the end of the day, just a set of vectors (one that satisfies the subspace properties). I get that part – that it’s this set of vectors. So how do I “project a vector onto this subspace”?

Am I projecting my one vector (let’s call it a[n]) onto ALL the vectors in this subspace? (What if there are infinitely many of them?)

For further context, the professor said that if we found a set of basis vectors for a signal (let’s call them b[n] and c[n]), then we would project a[n] onto the signal subspace formed by b[n] and c[n]. How is this done, exactly?

Thanks in advance, let me know if I can clarify anything!

P.S. I appreciate your help, and I would really like the clarification to this problem to be somewhat ‘concrete’ – for example, something that I can check for myself in MATLAB. Analogues in 2-D or 3-D space, so that I can visualize what is going on, would be very much appreciated as well.

Thanks again.

**Answer**

I will talk about orthogonal projection here.

When you project a vector, say v, onto a subspace, you find the vector in the subspace which is closest to v. The simplest case is, of course, when v is already in the subspace: then the projection of v onto the subspace is v itself.
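Since you asked for something concrete: here is a minimal 2-D sketch in NumPy (whose syntax closely mirrors MATLAB; the vectors u and v are made up for illustration). It finds the point on a line through the origin closest to v by brute force, and checks it against the least-squares solution, which is exactly the orthogonal projection.

```python
import numpy as np

# Hypothetical 2-D example: the subspace is the line spanned by u.
u = np.array([2.0, 1.0])
v = np.array([1.0, 3.0])

# Least squares: find t minimizing ||t*u - v||; then t*u is the projection.
t, *_ = np.linalg.lstsq(u[:, None], v, rcond=None)
p = t[0] * u

# Brute-force check of "closest": scan points s*u along the line and
# keep the one with the smallest distance to v.
ss = np.linspace(-5, 5, 100001)
dists = np.linalg.norm(ss[:, None] * u - v, axis=1)
best = ss[np.argmin(dists)] * u

print(p, best)  # best ≈ p
```

In MATLAB the least-squares step would be the backslash operator, `t = u \ v`.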

Now, the simplest kind of subspace is a one-dimensional subspace, say U = \operatorname{span}(u). Given an arbitrary vector v not in U, we can project it onto U by

v_{\| U} = \frac{\langle v, u \rangle}{\langle u, u \rangle} u,

which will be a vector in U. There will be other vectors besides v that have the same projection onto U.
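A quick numerical check of both claims, with made-up vectors u, v, w: the residual v − v_{∥U} is orthogonal to u, and adding to v any w orthogonal to u leaves the projection unchanged.

```python
import numpy as np

u = np.array([1.0, 0.0, 0.0])   # hypothetical basis vector of the 1-D subspace U
v = np.array([3.0, 4.0, 5.0])

proj = lambda x: (x @ u) / (u @ u) * u   # the 1-D projection formula

p = proj(v)
# The residual v - p is orthogonal to u:
print((v - p) @ u)              # 0.0

# Any vector v + w with w orthogonal to u projects to the same point:
w = np.array([0.0, -2.0, 7.0])  # w @ u == 0
print(proj(v + w))              # same as p
```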

Now, let’s assume U = \operatorname{span}(u_1, u_2, \dots, u_k) and, since you said so in your question, assume that the u_i are orthogonal. For a vector v, you can project v onto U by

v_{\| U} = \sum_{i =1}^k \frac{\langle v, u_i\rangle}{\langle u_i, u_i \rangle} u_i = \frac{\langle v , u_1 \rangle}{\langle u_1 , u_1 \rangle} u_1 + \dots + \frac{\langle v , u_k \rangle}{\langle u_k , u_k \rangle} u_k.
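In your notation, b[n] and c[n] play the role of u_1 and u_2. Here is a sketch with made-up orthogonal vectors in R^3, chosen so you can visualize the projection as dropping v onto the xy-plane:

```python
import numpy as np

# Hypothetical orthogonal basis of a 2-D subspace of R^3
# (playing the role of b[n] and c[n] in the question).
u1 = np.array([1.0, 1.0, 0.0])
u2 = np.array([1.0, -1.0, 0.0])
assert u1 @ u2 == 0  # orthogonal, as the formula requires

v = np.array([2.0, 3.0, 4.0])

# Sum of the 1-D projections onto each basis vector.
p = (v @ u1) / (u1 @ u1) * u1 + (v @ u2) / (u2 @ u2) * u2
print(p)  # [2. 3. 0.]
```

Note that the formula is just the 1-D projection applied to each basis vector and summed; this decomposition is valid only because the u_i are orthogonal.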

**Attribution**
*Source: Link, Question Author: Spacey, Answer Author: Calle*