# Better Proofs Than Rudin’s For The Inverse And Implicit Function Theorems

I am finding Rudin’s proofs of these theorems very non-intuitive and difficult to recall. I can understand and follow both as I work through them, but if you were to ask me a week later to prove one or the other, I couldn’t do it.

For instance, the use of a contraction mapping in the inverse function theorem seems to require one to memorize, at the very least, a non-obvious (at least to me) function (viz. $\phi(\mathbf{x}) = \mathbf{x} + \mathbf{A}^{-1}(\mathbf{y}-\operatorname{f}(\mathbf{x}))$) and constant (viz. $\lambda^{-1} = 2 \Vert \mathbf{A}^{-1}\Vert$), where $\mathbf{A}$ is the differential of $\operatorname{f}$ at $\mathbf{a}$.
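(For what it's worth, the function is less mysterious than it first looks once one checks two things, a short verification in Rudin's notation: its fixed points are exactly the solutions of $\operatorname{f}(\mathbf{x}) = \mathbf{y}$, and the constant $\lambda$ is exactly what makes $\phi$ a contraction.)
$$\phi(\mathbf{x}) = \mathbf{x} \iff \mathbf{A}^{-1}(\mathbf{y}-\operatorname{f}(\mathbf{x})) = \mathbf{0} \iff \operatorname{f}(\mathbf{x}) = \mathbf{y},$$
$$\phi'(\mathbf{x}) = \mathbf{I} - \mathbf{A}^{-1}\operatorname{f}'(\mathbf{x}) = \mathbf{A}^{-1}\bigl(\mathbf{A} - \operatorname{f}'(\mathbf{x})\bigr), \quad \text{so} \quad \Vert \phi'(\mathbf{x}) \Vert \le \Vert \mathbf{A}^{-1} \Vert \, \Vert \mathbf{A} - \operatorname{f}'(\mathbf{x}) \Vert < \tfrac{1}{2}$$
whenever $\Vert \mathbf{A} - \operatorname{f}'(\mathbf{x}) \Vert < \lambda$, since $\Vert \mathbf{A}^{-1} \Vert = \tfrac{1}{2\lambda}$. Still, one has to remember to check these things, which is the complaint.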

The implicit function theorem proof, while not as bad, also requires one to construct a new function without ever hinting as to what the motivation is.

I searched the previous questions on this site and haven’t found this addressed, so I figured I’d ask. I did find this proof to have a much more intuitive approach to the inverse function theorem, but would like to see what proofs are preferred by others.

Suppose you want to find the inverse of the mapping $F: \mathbb{R}^n \rightarrow \mathbb{R}^n$ near a point $x_o$ where $F'(x_o)$ is invertible. The derivative (Jacobian matrix) provides an approximate form for the map $F(x) = F(x_o)+F'(x_o)(x-x_o)+\eta$. If you set $y = F(x)$ and ignore the error term $\eta$ then solving for $x$ gives us the first approximation to the inverse mapping.
$$x = x_o+[F'(x_o)]^{-1}(y-F(x_o)).$$
I don’t know if this helps or not, but the approach is almost brute force: to invert $F(x)=y$, what do you do? You solve for $x$. We can’t do that abstractly for $F$, so instead we solve the next best thing, the linearization. Then the beauty of the contraction mapping technique completes the argument.
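To make the "solve the linearization, then iterate" idea concrete, here is a small numerical sketch. The map `F` below is a made-up example (not from Rudin), chosen so its Jacobian at the base point is easy to write down by hand; the loop is exactly the contraction iteration $x \mapsto x + [F'(x_0)]^{-1}(y - F(x))$ from the proof.

```python
import numpy as np

# A toy map F: R^2 -> R^2 (hypothetical example) with an
# invertible Jacobian at the base point x0 = (0, 0).
def F(x):
    return np.array([x[0] + 0.1 * np.sin(x[1]),
                     x[1] + 0.1 * x[0] ** 2])

x0 = np.array([0.0, 0.0])

# Jacobian F'(x0), computed by hand for this F:
# row 1: [1, 0.1*cos(x1)] at x1 = 0  ->  [1, 0.1]
# row 2: [0.2*x0, 1]      at x0 = 0  ->  [0, 1]
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
A_inv = np.linalg.inv(A)

# A target value y near F(x0) = (0, 0).
y = np.array([0.05, 0.02])

# Contraction iteration phi(x) = x + A^{-1} (y - F(x)):
# each step solves the linearization with the current residual.
x = x0.copy()
for _ in range(50):
    x = x + A_inv @ (y - F(x))

print(F(x))  # the iterate satisfies F(x) = y to machine precision
```

Near $x_0$ the derivative $\phi'(x) = I - A^{-1}F'(x)$ is small, so each pass shrinks the residual geometrically; that is the contraction property the proof exploits.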