Arithmetic-geometric mean of 3 numbers

The arithmetic-geometric mean[1][2] of 2 numbers a and b is denoted \operatorname{AGM}(a,b) and defined as follows:
\text{Let}\quad a_0=a,\quad b_0=b,\quad a_{n+1}=\frac{a_n+b_n}2,\quad b_{n+1}=\sqrt{a_nb_n};
\quad\text{then}\quad \operatorname{AGM}(a,b)=\lim_{n\to\infty}a_n=\lim_{n\to\infty}b_n.
The arithmetic-geometric mean can be expressed in a closed form using the complete elliptic integral of the first kind and elementary functions.
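For reference, a standard statement of this closed form (Gauss's identity, with K the complete elliptic integral of the first kind in the modulus convention) is, for a\ge b>0,
\frac{1}{\operatorname{AGM}(a,b)}=\frac{2}{\pi}\int_0^{\pi/2}\frac{\mathrm d\theta}{\sqrt{a^2\cos^2\theta+b^2\sin^2\theta}}=\frac{2}{\pi a}\,K\!\left(\sqrt{1-b^2/a^2}\right).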

Let us try to generalize the arithmetic-geometric mean to 3 numbers a,b and c. One way to define it would be simply \operatorname{AGM}\left(\frac{a+b+c}3,\sqrt[3]{abc}\right). Apparently, this gives us nothing really new or interesting.

Let us consider a different approach:
\text{Let}\quad a_0=a,\quad b_0=b,\quad c_0=c,
\quad a_{n+1}=\operatorname{AGM}(b_n,c_n),\quad b_{n+1}=\operatorname{AGM}(a_n,c_n),\quad c_{n+1}=\operatorname{AGM}(a_n,b_n).
This gives us a function different from the one in the previous approach. For example, we can calculate the value for (a,b,c)=(1,2,3) numerically (the original post links to more digits).
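As a quick numerical sketch (function names and tolerances are my own, not from the original post), the iteration above can be implemented directly on top of the classical binary AGM:

```python
import math

def agm(a, b, tol=1e-15):
    """Classical two-argument arithmetic-geometric mean of positive a, b."""
    while abs(a - b) > tol * max(a, b):
        a, b = (a + b) / 2, math.sqrt(a * b)
    return (a + b) / 2

def agm3(a, b, c, tol=1e-13):
    """Proposed three-argument AGM: iterate
    (a, b, c) -> (AGM(b, c), AGM(a, c), AGM(a, b)) until the values merge."""
    while max(a, b, c) - min(a, b, c) > tol * max(a, b, c):
        a, b, c = agm(b, c), agm(a, c), agm(a, b)
    return (a + b + c) / 3
```

Since the update rule only permutes along with the arguments, agm3 is symmetric in a, b, c.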

Have this function and its properties been already studied? What is known about it? Is it possible to express \operatorname{AGM}(a,b,c) (or, at least, some of its non-trivial special values) in a closed form using known special functions?


I did not find any generalisation of the classical AGM to more than 2 variables discussed in the literature under that name. (There are lots of other generalisations, though: to matrices and more abstract noncommutative objects, weighted variants, integral expressions… In the comments it is suggested to consider Carlson-type integrals of the form I(\vec x)=\int_0^\infty \prod_i (t^2+x_i^2)^{-1/2}\,\omega(t)\,\mathrm dt as a possible generalisation to more than two arguments, but the choice of a weight \omega ensuring the averaging property appears not straightforward to me.)

As already remarked in the comments, the proposed formula based on iterating (a,b,c)\mapsto (AGM(b,c),AGM(a,c),AGM(a,b)) does not give the same result as the proposal made in this other StackExchange question, based on iterating (a,b,c)\mapsto (A(a,b,c),\sqrt{A(ab,bc,ac)},\sqrt[3]{abc}) (where A = average or arithmetic mean), which gives 1.90992623354….
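For comparison, the symmetric-polynomial proposal from that other question can be sketched as follows (the function name is my own):

```python
import math

def agm3_sym(a, b, c, tol=1e-14):
    """Iterate (a, b, c) -> (A(a,b,c), sqrt(A(ab,bc,ac)), cbrt(abc)),
    where A denotes the arithmetic mean, until the three values merge."""
    while max(a, b, c) - min(a, b, c) > tol * max(a, b, c):
        a, b, c = ((a + b + c) / 3,
                   math.sqrt((a * b + b * c + a * c) / 3),
                   (a * b * c) ** (1 / 3))
    return (a + b + c) / 3
```

By Maclaurin's inequality the three components stay ordered, and their spread shrinks at each step, so the loop terminates.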

Furthermore, it converges much more slowly, in spite of performing limiting processes (computations of binary AGMs) inside each iteration of the main limiting process, so it is computationally much more expensive.

Another drawback is that this idea does not generalise straightforwardly to more than 3 arguments: if you start out with 4, then you have 6 possible pairs and thus 6 two-argument AGMs; in the next step those 6 values give \binom{6}{2}=15 pairs, and so on, with an ever-growing number of values.

Other possible generalisations, all of which give distinct results unless initially all the arguments are equal, could be:

  • do the same as in the second approach (based on symmetric polynomials), but with b\mapsto A(\sqrt{ab},\sqrt{bc},\sqrt{ac}), i.e., the average of the roots instead of the root of the average, and similarly in the case of m\ge3 arguments. This was recently suggested by Brad Klee on the math-fun list. It yields 1.8932121…, a smaller value, as one could anticipate from the AM-GM inequality.
  • use (a,b,c)\mapsto (AGM(AGM(a,b),c),AGM(AGM(b,c),a),AGM(AGM(a,c),b)), suggested by Keith Lynch in reply to the above [math-fun] post. This doesn't generalise straightforwardly to more than 3 arguments either: just nesting as AGM(AGM(AGM(a,b),c),d) and its cyclic permutations does not give a symmetric function, and using more permutations runs into the same problem of an ever-growing number of components. For (1,2,3) this yields the value 1.90915044222…, very similar to your proposal. (Coincidentally, after the 7th digit (the second '0'), which is the first to differ from your value, two more decimals ('44') again coincide!) It appears to converge faster than your proposal, which outweighs the expense of computing twice as many binary AGMs at each step.
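To make the two variants above concrete, here is a sketch (function names are mine; the nested variant reuses a helper for the binary AGM):

```python
import math

def agm(a, b, tol=1e-15):
    """Classical two-argument arithmetic-geometric mean."""
    while abs(a - b) > tol * max(a, b):
        a, b = (a + b) / 2, math.sqrt(a * b)
    return (a + b) / 2

def agm3_klee(a, b, c, tol=1e-14):
    """Brad Klee's variant: average of the roots instead of the root
    of the average in the middle component."""
    while max(a, b, c) - min(a, b, c) > tol * max(a, b, c):
        a, b, c = ((a + b + c) / 3,
                   (math.sqrt(a * b) + math.sqrt(b * c) + math.sqrt(a * c)) / 3,
                   (a * b * c) ** (1 / 3))
    return (a + b + c) / 3

def agm3_lynch(a, b, c, tol=1e-13):
    """Keith Lynch's variant: nested binary AGMs,
    (a,b,c) -> (AGM(AGM(a,b),c), AGM(AGM(b,c),a), AGM(AGM(a,c),b))."""
    while max(a, b, c) - min(a, b, c) > tol * max(a, b, c):
        a, b, c = (agm(agm(a, b), c), agm(agm(b, c), a), agm(agm(a, c), b))
    return (a + b + c) / 3
```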

In conclusion (maybe biased by personal taste), the most natural generalisation of the classical binary AGM based on ((a+b)/2, (ab)^{1/2}) to any number m of arguments appears (to me!) to be the one using the k-th roots of the averages of all products of k among the m variables. Using averages of the roots instead seems similarly natural, except that it requires more roots to be computed and is not expressible in terms of elementary symmetric polynomials. Using nested calls to the binary AGM on each iteration appears less attractive (to me!) for various reasons: (a) it mixes, on different levels, multiple iterative procedures that are supposed to be "similar" but aren't (iteration of the transcendental function AGM, with AGM itself defined through iteration of the elementary operations of average and square root of product); (b) the generalisation to more than 3 arguments isn't straightforward (which to me is also a lack of "naturality"); and (c) it is computationally much less efficient.

It would be interesting to compare these with yet other ideas (integral representations, …?) according to the same or additional criteria.

Source: Link, Question Author: Vladimir Reshetnikov, Answer Author: Max
