Difference between Chebyshev first and second degree iterative methods

Consider the linear equation $Au = f$.

We want to solve it with an iterative method (assuming $A$ is well-behaved, e.g. symmetric positive definite).
The first-degree (one-step) iterative method is
$$u_{k+1} = u_k - \alpha_{k+1}(Au_k - f),$$
while the second-degree (two-step) method is
$$u_{k+1} = u_k - \alpha_{k+1}(Au_k - f) - \beta_{k+1}(u_k - u_{k-1}).$$

For both methods the iteration parameters $\alpha_k$, $\beta_k$ can be defined via a minimax problem whose solution is given by Chebyshev polynomials.
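To fix ideas, here is a minimal NumPy sketch of the first-degree method (the function name and the explicit spectral bounds $m$, $M$ are my own choices; it assumes $A$ is symmetric positive definite with spectrum in $[m, M]$). For a run of exactly $k$ steps, the minimax-optimal parameters are the reciprocals of the roots of the Chebyshev polynomial $T_k$ mapped onto $[m, M]$:

```python
import numpy as np

def chebyshev_first_degree(A, f, u0, m, M, k):
    """First-degree (one-step) Chebyshev method: the step sizes are
    tuned in advance for exactly k iterations."""
    u = u0.copy()
    for i in range(1, k + 1):
        # i-th root of the Chebyshev polynomial T_k mapped onto [m, M]
        t_i = 0.5 * (M + m) + 0.5 * (M - m) * np.cos((2 * i - 1) * np.pi / (2 * k))
        # Richardson step with alpha_i = 1 / t_i
        u = u - (1.0 / t_i) * (A @ u - f)
    return u
```

In exact arithmetic the order in which the roots are visited does not matter; in floating point a special ordering is used in practice to keep intermediate errors from growing, which this sketch ignores.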

This is good, but it seems to me that the convergence speed is the same in both cases, namely
$$\|\varepsilon_k\| \le \frac{2\sigma^k}{1+\sigma^{2k}}\,\|\varepsilon_0\|,$$
where $\varepsilon_k = u - u_k$ is the approximation error after the $k$-th iteration and $\sigma$ is a constant depending on the spectrum of the operator.
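(Here, for a symmetric positive definite $A$ with spectrum in $[m, M]$, the standard Chebyshev analysis gives
$$\sigma = \frac{\sqrt{M}-\sqrt{m}}{\sqrt{M}+\sqrt{m}} = \frac{\sqrt{\kappa}-1}{\sqrt{\kappa}+1}, \qquad \kappa = M/m,$$
so $\sigma$ depends only on the condition number of the operator.)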

The only idea I have is that the first-degree iteration is optimal only at the chosen final step $k$ for which the coefficients were computed, while the second-degree iteration is optimal at every step; the sketch below tries to make this concrete.
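Here is a matching NumPy sketch of the second-degree (two-step) method (again my own notation, under the same assumption that $A$ is symmetric positive definite with spectrum in $[m, M]$). The parameters $\alpha_{k+1} = 2\rho_k/\delta$ and $\beta_{k+1} = -\rho_k\rho_{k-1}$ follow from the three-term recurrence $T_{k+1}(t) = 2tT_k(t) - T_{k-1}(t)$ of the Chebyshev polynomials, so no final step count has to be fixed in advance:

```python
import numpy as np

def chebyshev_second_degree(A, f, u0, m, M, n_steps):
    """Second-degree (two-step) Chebyshev method: the iterate is
    minimax-optimal at every step, not just at a preselected one."""
    theta = 0.5 * (M + m)   # center of the spectral interval
    delta = 0.5 * (M - m)   # half-width of the spectral interval
    sigma1 = theta / delta
    rho_prev = 1.0 / sigma1                  # rho_0 = T_0(sigma1) / T_1(sigma1)
    u_prev = u0.copy()
    u = u0 - (1.0 / theta) * (A @ u0 - f)    # first step is plain Richardson
    for _ in range(n_steps - 1):
        rho = 1.0 / (2.0 * sigma1 - rho_prev)   # rho_k = T_k(sigma1) / T_{k+1}(sigma1)
        alpha = 2.0 * rho / delta
        beta = -rho * rho_prev
        u, u_prev = u - alpha * (A @ u - f) - beta * (u - u_prev), u
        rho_prev = rho
    return u
```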

I would greatly appreciate any references or explanations that clear up these details.

Answer

I think that the difference between the two methods isn't the degree as such: the first method is a one-step method, while the second one is a two-step method. That is why the convergence speed can be similar or even the same; the choice depends on which practical advantages are required.
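To illustrate this numerically, here is a small experiment of my own (it reuses the two sketches from the question above and builds a random symmetric positive definite test matrix): when the first-degree method is re-tuned for each final step count, both methods reach errors of the same order, but the second-degree method produces all intermediate iterates from a single run.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, M = 200, 1.0, 100.0

# Random SPD matrix with spectrum exactly inside [m, M].
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Q @ np.diag(np.linspace(m, M, n)) @ Q.T
u_true = rng.standard_normal(n)
f = A @ u_true
u0 = np.zeros(n)

for steps in (5, 10, 25):
    # First-degree method must be re-run with coefficients tuned to `steps`.
    e1 = np.linalg.norm(chebyshev_first_degree(A, f, u0, m, M, steps) - u_true)
    # Second-degree method: one trajectory, optimal at every step.
    e2 = np.linalg.norm(chebyshev_second_degree(A, f, u0, m, M, steps) - u_true)
    print(f"steps={steps:3d}  first-degree err={e1:.3e}  second-degree err={e2:.3e}")
```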

Attribution
Source: Link, Question Author: Moonwalker, Answer Author: Martin Sleziak
