# Difference between Chebyshev first and second degree iterative methods

Consider the linear equation $Au = f$.

We want to solve it with an iterative method, assuming $A$ is well-behaved (say, symmetric positive definite with spectrum in $[\lambda_{\min}, \lambda_{\max}]$).
The first degree (one-step) iterative method is:

$$u^{k+1} = u^k - \alpha_{k+1}\,(Au^k - f).$$

The second degree (two-step) method is:

$$u^{k+1} = u^k - \alpha_{k+1}\,(Au^k - f) + \beta_{k+1}\,(u^k - u^{k-1}).$$

For both methods we can define the iteration parameters $\alpha_k$, $\beta_k$ via a minimax problem whose solution is given by Chebyshev polynomials.

This is all good, but it seems to me that the convergence estimate is the same in both cases:

$$\|\varepsilon^{k}\| \le \frac{2\sigma^{k}}{1+\sigma^{2k}}\,\|\varepsilon^{0}\|,$$

where $\varepsilon^{k} = u - u^{k}$ is the approximation error after the $k$-th iteration and $\sigma$ is a constant depending on the spectrum of the operator.
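For concreteness (this expression is not spelled out above, but it is the standard one for a symmetric positive definite $A$), the constant is usually written in terms of the spectrum bounds as

$$\sigma = \frac{1-\sqrt{\xi}}{1+\sqrt{\xi}}, \qquad \xi = \frac{\lambda_{\min}(A)}{\lambda_{\max}(A)},$$

so the better conditioned the operator, the faster both methods converge.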

The only idea I have is that the first degree iteration is optimal only at the chosen final step $k$ for which the coefficients are computed, while the second degree iteration is optimal at every step.
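To check this numerically, here is a small sketch of my own (not taken from any particular reference): the one-step method uses the reciprocals of the scaled Chebyshev roots as its parameters, and the two-step method uses the standard Chebyshev semi-iteration three-term recurrence. The test matrix, the exact spectrum bounds `lmin`, `lmax`, and the iteration count `N` are all assumptions for illustration.

```python
import numpy as np

def chebyshev_first_degree(A, f, u0, lmin, lmax, N):
    """One-step (first degree) Chebyshev iteration.
    The N parameters tau_j are reciprocals of the Chebyshev roots
    mapped to [lmin, lmax]; the iterate is optimal only after all N steps."""
    u = u0.copy()
    for j in range(1, N + 1):
        tau = 2.0 / (lmax + lmin - (lmax - lmin) * np.cos(np.pi * (2 * j - 1) / (2 * N)))
        u = u + tau * (f - A @ u)
    return u

def chebyshev_second_degree(A, f, u0, lmin, lmax, N):
    """Two-step (second degree) Chebyshev semi-iteration via the
    standard three-term recurrence; every iterate is Chebyshev-optimal."""
    theta = (lmax + lmin) / 2.0   # center of the spectrum interval
    delta = (lmax - lmin) / 2.0   # half-width of the spectrum interval
    sigma1 = theta / delta
    rho = 1.0 / sigma1
    u = u0.copy()
    r = f - A @ u
    d = r / theta
    for _ in range(N):
        u = u + d
        r = r - A @ d
        rho_next = 1.0 / (2.0 * sigma1 - rho)
        d = rho_next * rho * d + (2.0 * rho_next / delta) * r
        rho = rho_next
    return u

# Toy SPD system with a known spectrum in [1, 10]
rng = np.random.default_rng(0)
n = 50
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
lam = np.linspace(1.0, 10.0, n)
A = Q @ np.diag(lam) @ Q.T
u_exact = rng.standard_normal(n)
f = A @ u_exact
u0 = np.zeros(n)

N = 20
e1 = np.linalg.norm(u_exact - chebyshev_first_degree(A, f, u0, 1.0, 10.0, N))
e2 = np.linalg.norm(u_exact - chebyshev_second_degree(A, f, u0, 1.0, 10.0, N))
print(e1, e2)
```

After the full $N$ steps both methods apply the same degree-$N$ Chebyshev error polynomial, so the two final errors should essentially coincide; the difference is that only the two-step recurrence is optimal at each intermediate step.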

I would greatly appreciate any references or explanations that clear up these details.

I think that the essential difference between the two methods isn't the degree: the first is a one-step method, while the second is a two-step method. Therefore the convergence speed can be similar or even the same, and the choice depends on which practical advantage is required.