Proofs of AM-GM inequality

The arithmetic-geometric mean inequality states that, for nonnegative real numbers x_1, \ldots, x_n,

\frac{x_1 + \cdots + x_n}{n} \ge \sqrt[n]{x_1 \cdots x_n}
I’m looking for some original proofs of this inequality. I can find the usual proofs on the internet, but I was wondering whether someone knows a proof that is unexpected in some way. For example, can you link the theorem to some famous theorem, can you find a non-trivial geometric proof (I can find some of those), or a proof that uses theory that doesn’t seem linked to this inequality at first sight (e.g. differential equations)?

Induction, backward induction, Jensen’s inequality, swapping terms, Lagrange multipliers, a proof using thermodynamics (yeah, I know, it’s rather a physical argument that the theorem might be true, not really a proof), convexity, … are some of the proofs I know.

Answer

This is a fairly old answer of mine with a proof that was not very motivated, so I thought I’d update it with some geometric intuition about convexity.

Consider for simplicity the two-variable case \frac{a+b}{2} \ge \sqrt{ab} and fix, say, a = 1. The plots of (1+b)/2 and \sqrt{b} show intuitively how the concave nature of the geometric mean forces it to lie below the arithmetic mean, with equality at exactly one point, b = 1. In fact, this concavity extends to any number of variables, but obviously a plot is not a proof.

[Plot of (1+b)/2 and \sqrt{b}: the arithmetic mean lies above the geometric mean, touching it only at b = 1]
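
For completeness, the two-variable case itself follows from a one-line algebraic observation: since squares are nonnegative,

(\sqrt{a} - \sqrt{b})^2 = a - 2\sqrt{ab} + b \ge 0,

and rearranging gives \frac{a+b}{2} \ge \sqrt{ab}, with equality exactly when a = b.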

The proof presented here for more than two variables requires only elementary properties of logarithms, which turn multiplication into addition. We start from the inequality in the form

\frac{x_1 + \cdots + x_n}{n} \ge (x_1 \cdots x_n)^{1/n}

Taking logs preserves the inequality since log is an increasing function:

\log\left(\frac{x_1 + \cdots + x_n}{n}\right) \ge \frac{1}{n}\log(x_1 \cdots x_n) = \frac{\log x_1 + \cdots + \log x_n}{n}


If we write E[X] for the mean of the x_i’s and E[\log(X)] for the mean of the \log x_i’s, we can also understand this in the language of expectation:

\log(E[X]) \ge E[\log(X)]

By the concavity of \log and Jensen’s inequality (which can be proved inductively starting from the definition of convexity), this inequality holds, with equality exactly when all the x_i are equal.
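
As a quick numeric sanity check of the expectation form (my addition, not part of the original answer; the sample values are made up for illustration):

```python
import numpy as np

# Hypothetical positive samples, chosen only for illustration
rng = np.random.default_rng(0)
x = rng.uniform(0.1, 10.0, size=1000)

lhs = np.log(x.mean())       # log of the arithmetic mean, log(E[X])
rhs = np.log(x).mean()       # mean of the logs, E[log(X)] = log of the geometric mean
print(lhs, rhs, lhs >= rhs)  # Jensen's inequality predicts lhs >= rhs
```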


Original post of Pólya’s proof, using a similar convexity idea, this time for e^x:

Let f(x) = e^{x-1} - x. The first derivative is f'(x) = e^{x-1} - 1 and the second derivative is f''(x) = e^{x-1}.

f is convex everywhere because f''(x) > 0, and it attains its minimum at x = 1, where f(1) = e^0 - 1 = 0. Therefore f(x) \ge 0, i.e. x \le e^{x-1} for all x, with equality only when x = 1.
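
A quick symbolic check of these calculus facts (again my addition, using sympy) could look like this:

```python
import sympy as sp

x = sp.symbols('x', real=True)
f = sp.exp(x - 1) - x    # Polya's auxiliary function

print(sp.diff(f, x))     # e^(x-1) - 1, which vanishes at x = 1
print(sp.diff(f, x, 2))  # e^(x-1), positive everywhere, so f is convex
print(f.subs(x, 1))      # 0, the minimum value, giving x <= e^(x-1)
```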

Using this inequality we get

\frac{x_1}{a} \frac{x_2}{a} \cdots \frac{x_n}{a} \le e^{\frac{x_1}{a}-1} e^{\frac{x_2}{a}-1} \cdots e^{\frac{x_n}{a}-1}

where a = \frac{x_1 + \cdots + x_n}{n} is the arithmetic mean. The right side simplifies to

\exp \left(\frac{x_1}{a} - 1 + \frac{x_2}{a} - 1 + \cdots + \frac{x_n}{a} - 1 \right)

=\exp \left(\frac{x_1 + x_2 + \cdots + x_n}{a} - n \right) = \exp(n - n) = e^0 = 1,

since x_1 + \cdots + x_n = na by the definition of a.

Going back to the first inequality

\frac{x_1x_2\cdots x_n}{a^n} \le 1

Multiplying through by a^n and taking n-th roots, we end with

\sqrt[n]{x_1x_2\cdots x_n} \le a
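
To see the whole chain in action, here is a small numeric sketch (my addition; the inputs are invented for illustration) that checks both the termwise bound and the final conclusion:

```python
import numpy as np

# Arbitrary positive inputs, made up for this example
rng = np.random.default_rng(1)
x = rng.uniform(0.1, 5.0, size=8)
a = x.mean()                             # arithmetic mean

# Termwise bound x_i/a <= e^{x_i/a - 1} from the convexity argument
assert np.all(x / a <= np.exp(x / a - 1))

gm = x.prod() ** (1.0 / len(x))          # geometric mean
print(gm, a, gm <= a)                    # expect gm <= a
```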

Attribution
Source: Link, Question Author: Michiel Van Couwenberghe, Answer Author: qwr
