Necessity/Advantage of LU Decomposition over Gaussian Elimination

I am reading the book “Introduction to Linear Algebra” by Gilbert Strang and couldn’t help wondering about the advantages of LU decomposition over Gaussian elimination!

For a system of linear equations in the form Ax = b, one of the methods to solve for the unknowns is Gaussian elimination, where you form an upper triangular matrix U by forward elimination and then find the unknowns by back substitution. This serves the purpose of solving a system of linear equations. What is the necessity for LU decomposition, i.e., after finding U by forward elimination, why do we go on to find L (the lower triangular matrix) when we already have U and could do back substitution?


In many engineering applications, when you solve Ax = b, the matrix A ∈ ℝ^(N×N) remains unchanged, while the right-hand side vector b keeps changing.

A typical example is solving a partial differential equation for different forcing functions. For these different forcing functions, the mesh is usually kept the same. The matrix A depends only on the mesh parameters and hence remains unchanged across the different forcing functions. However, the right-hand side vector b changes for each forcing function.

Another example is a time-dependent problem, where the unknowns evolve with time. In this case again, if the time step is constant across time instants, the matrix A remains unchanged and only the right-hand side vector b changes at each time step.

The key idea behind solving with the LU factorization (or, for that matter, any factorization) is to decouple the factorization phase (usually computationally expensive) from the actual solve phase. The factorization phase needs only the matrix A, while the solve phase uses the factored form of A together with the right-hand side to solve the linear system. Hence, once we have the factorization, we can reuse the factored form of A to solve for different right-hand sides at a relatively moderate computational cost.
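This factor-once, solve-many pattern can be sketched with SciPy's `lu_factor`/`lu_solve` (the sizes and random test matrix here are illustrative assumptions, not from the original post):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(0)
N = 100
# Diagonally dominated random matrix, just so the example is well conditioned.
A = rng.standard_normal((N, N)) + N * np.eye(N)

# Factorization phase: O(N^3), performed once. Only A is needed.
lu, piv = lu_factor(A)

# Solve phase: O(N^2) per right-hand side, reusing the same factorization.
for _ in range(5):
    b = rng.standard_normal(N)
    x = lu_solve((lu, piv), b)
    assert np.allclose(A @ x, b)
```

Note that `b` never appears in the factorization call; that separation is exactly what makes multiple right-hand sides cheap.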

The cost of factorizing the matrix A into LU is O(N³). Once you have this factorization, the cost of solving LUx = b is just O(N²), since the cost of solving a triangular system scales as O(N²).

(Note that to solve LUx = b, you first solve Ly = b and then Ux = y. Each of these triangular solves costs O(N²).)
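The two triangular solves can be written out explicitly; this is a minimal sketch of forward and back substitution (the function names and the 2×2 example are mine, not from the answer):

```python
import numpy as np

def forward_substitution(L, b):
    """Solve Ly = b for lower-triangular L; the double loop is O(N^2)."""
    N = len(b)
    y = np.zeros(N)
    for i in range(N):
        y[i] = (b[i] - L[i, :i] @ y[:i]) / L[i, i]
    return y

def back_substitution(U, y):
    """Solve Ux = y for upper-triangular U; also O(N^2)."""
    N = len(y)
    x = np.zeros(N)
    for i in range(N - 1, -1, -1):
        x[i] = (y[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

# Small worked example: A = LU with unit-diagonal L.
L = np.array([[1.0, 0.0], [2.0, 1.0]])
U = np.array([[4.0, 3.0], [0.0, -2.0]])
b = np.array([7.0, 8.0])

y = forward_substitution(L, b)  # Ly = b
x = back_substitution(U, y)     # Ux = y, so (LU)x = b
```

Each loop iteration touches only one row of the triangular matrix, which is where the O(N²) total comes from.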

Hence, if you have r right-hand side vectors {b₁, b₂, …, bᵣ}, then once you have the LU factorization of the matrix A, the total cost of solving Ax₁ = b₁, Ax₂ = b₂, …, Axᵣ = bᵣ scales as O(N³ + rN²).

On the other hand, if you run Gaussian elimination from scratch for each right-hand side vector bⱼ, the total cost scales as O(rN³), since each elimination independently costs O(N³).
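To make the asymptotic comparison concrete, here is a back-of-the-envelope operation count for hypothetical sizes (constants and lower-order terms ignored, so this is an estimate, not a measured timing):

```python
# Hypothetical problem sizes chosen purely for illustration.
N, r = 1000, 100

lu_then_solve = N**3 + r * N**2  # factor once (O(N^3)), then r solves (O(N^2) each)
repeated_elim = r * N**3         # full elimination redone for every b_j

ratio = repeated_elim / lu_then_solve
print(ratio)  # roughly 91x more work for repeated elimination
```

At these sizes the repeated-elimination approach does about two orders of magnitude more arithmetic, and the gap widens as r grows.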

However, when people say Gaussian elimination, they usually mean LU decomposition, not the method of solving each right-hand side completely independently.

Source: Link, Question Author: Chethan Ravindranath, Answer Author: Community
