# Simplest proof of Taylor’s theorem

I have for some time been trawling through the Internet looking for an aesthetic proof of Taylor’s theorem.

By which I mean this: there are plenty of proofs that introduce some arbitrary construct, with no mention of where this beast came from, and then logically hack away line by line until the thing is solved. But this kind of proof is ugly. A beautiful proof should rise naturally from the ground.

I’ve seen one proof claiming to do it from the fundamental theorem of calculus. It looked messy.

I’ve seen several attempts to use integration by parts repeatedly. But surely it would be tidier to do this without bringing in all of that extra machinery.

The nicest two approaches seem to involve the mean value theorem and Rolle’s theorem. But I can’t find a lucid presentation of either approach.

Maybe my brain is unusually stupid, and the approaches on Wikipedia etc. are perfectly good for everyone else.

Does anyone have a crystal clear understanding of this phenomenon? Or a web-link to such an understanding?

*EDIT*: Eventually a Cambridge mathematician explained it to me in a way that I could understand, and I have written up the proof here. To my mind it is the most instructive proof I have encountered, yet when I posted it as an answer it received mostly downvotes. It seems strange to me that no one else concurs, but it should be up to the keenest mathematical minds to choose which answer is accepted, not me. Therefore I will bow to the wisdom of the community, and accept the currently most-upvoted answer. I have learned from Machine Learning that a “Committee of Experts” outperforms any one expert, and I am certainly no expert.

Here is an approach that seems rather natural,
based on applying the fundamental theorem
of calculus successively to $f(x)$, $f'(t_1)$, $f''(t_2)$, etc.:

Notice that
$$f(x) = f(a) + \int_a^x f'(t_1)\,dt_1, \qquad f'(t_1) = f'(a) + \int_a^{t_1} f''(t_2)\,dt_2,$$
and in general
$$f^{(n)}(t_n) = f^{(n)}(a) + \int_a^{t_n} f^{(n+1)}(t_{n+1})\,dt_{n+1}.$$
By induction, then, one proves
$$f(x) = P_n(x) + R_n(x),$$
where $P_n$ is the Taylor polynomial
$$P_n(x) = \sum_{k=0}^{n} \frac{f^{(k)}(a)}{k!}\,(x-a)^k,$$
and the remainder $R_n(x)$ is represented by nested integrals as
$$R_n(x) = \int_a^x\!\int_a^{t_1}\!\cdots\int_a^{t_n} f^{(n+1)}(t_{n+1})\,dt_{n+1}\cdots dt_1.$$
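As a sanity check (not part of the proof), the identity $f(x)=P_n(x)+R_n(x)$ can be verified numerically. The helper `nested_integral` and the choices $f=\exp$, $a=0$, $x=1$, $n=2$ below are illustrative assumptions, not taken from the argument above:

```python
import math

def nested_integral(g, a, x, depth, steps=40):
    """Approximate the depth-fold nested integral
    int_a^x int_a^{t1} ... g(t_depth) dt_depth ... dt_1
    by recursive midpoint Riemann sums."""
    if depth == 0:
        return g(x)
    h = (x - a) / steps
    return sum(nested_integral(g, a, a + (i + 0.5) * h, depth - 1, steps) * h
               for i in range(steps))

# Illustrative choice: f = exp, a = 0, x = 1, n = 2.  Every derivative of
# exp is exp, so f^{(k)}(0) = 1 and P_2(1) = 1 + 1 + 1/2.
P = sum(1.0 / math.factorial(k) for k in range(3))
R = nested_integral(math.exp, 0.0, 1.0, depth=3)  # 3-fold integral of f^{(3)}
# P + R should reproduce f(1) = e, up to the quadrature error
```

The midpoint rule keeps the error at each level of order $1/\text{steps}^2$, so `P + R` matches $e$ to a few decimal places even with modest `steps`.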
We can establish the Lagrange form of the remainder by applying the intermediate
and extreme value
theorems, using simple comparisons as follows. Consider the case $x>a$ first.
Let $m$ be the minimum value of $f^{(n+1)}$ on $[a,x]$, and $M$ the maximum value.
Then since
$$m \le f^{(n+1)}(t_{n+1}) \le M$$
for all $t_{n+1}$ in $[a,x]$, after $n+1$ repeated integrations one finds
$$m\,\frac{(x-a)^{n+1}}{(n+1)!} \;\le\; R_n(x) \;\le\; M\,\frac{(x-a)^{n+1}}{(n+1)!}.$$
But now, notice that the function
$$t \mapsto f^{(n+1)}(t)\,\frac{(x-a)^{n+1}}{(n+1)!}$$
attains the extreme values
$$m\,\frac{(x-a)^{n+1}}{(n+1)!} \quad\text{and}\quad M\,\frac{(x-a)^{n+1}}{(n+1)!}$$
at some points in $[a,x]$. By the intermediate value theorem, there must be
some point $t$ between these two points (so $t\in[a,x]$) such that
$$R_n(x) = f^{(n+1)}(t)\,\frac{(x-a)^{n+1}}{(n+1)!}.$$
This is the Lagrange form of the remainder.
If $x<a$ and $n$ is odd, the same proof works. If $x<a$ and $n$ is even,
$(x-a)^{n+1}<0$ and the same proof works after reversing some inequalities.
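The two-sided bound on $R_n(x)$ is easy to check numerically. The choices $f=\sin$, $a=0$, $x=1$, $n=2$ (so $f^{(3)}(t)=-\cos t$) are illustrative assumptions:

```python
import math

a, x, n = 0.0, 1.0, 2

def d3(t):
    return -math.cos(t)              # f^{(n+1)} for f = sin, n = 2

# Taylor polynomial of sin about 0 up to degree 2 is just x
P = x
R = math.sin(x) - P                  # remainder R_n(x)

# Extreme values m, M of f^{(n+1)} on [a, x], estimated by dense sampling
samples = [d3(a + i * (x - a) / 1000) for i in range(1001)]
m, M = min(samples), max(samples)

factor = (x - a) ** (n + 1) / math.factorial(n + 1)
# The proof asserts m*factor <= R <= M*factor, so by the intermediate
# value theorem some t in [a, x] has R = f^{(n+1)}(t) * factor.
```

Here $R_2(1)=\sin 1-1\approx-0.1585$ indeed lands between $m/6\approx-0.1667$ and $M/6\approx-0.0901$.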

One can motivate this whole approach in a couple of different ways.
E.g., one can argue that
${(x-a)^n}/{n!}$
becomes small for large $n$, so the remainders $R_n(x)$ will become small
if the derivatives of $f$ stay bounded, say.
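The claim that $(x-a)^n/n!$ eventually shrinks can be seen directly; the value $x-a=3$ below is an arbitrary illustration:

```python
import math

# For any fixed x - a, factorial growth eventually beats the power:
# terms rise while n is small, then shrink toward 0.
terms = [3.0 ** n / math.factorial(n) for n in range(21)]
```

The terms peak around $n=3$ and then decay rapidly; by $n=20$ the term is below $10^{-6}$.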

Or, one can reason loosely as follows: $f(x)\approx f(a)$ for $x$ near $a$.
Ask, what is the remainder exactly? Apply the fundamental theorem as above, then
approximate the first remainder using the approximation $f'(t_1)\approx f'(a)$.
Repeating, one produces the Taylor polynomials by the pattern of the argument above.
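One can watch this loose argument converge numerically; a minimal sketch, assuming $f=\exp$ and $a=0$ (illustrative choices):

```python
import math

def taylor_approx(x, n):
    """Taylor polynomial of exp about a = 0 (every f^{(k)}(0) = 1)."""
    return sum(x ** k / math.factorial(k) for k in range(n + 1))

# Each repetition of the argument adds one correction term and
# shrinks the error |f(x) - P_n(x)|:
errors = [abs(math.exp(0.5) - taylor_approx(0.5, n)) for n in range(6)]
```

The errors decrease monotonically, mirroring how each application of the fundamental theorem replaces a crude $\approx$ with an exact remainder plus a better approximation.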