I was doing some software engineering and wanted to have a thread do something in the background to basically just waste CPU time for a certain test.
While I could have done something really boring like `for (i = 0; i < 10000000; i++) { j = 2 * i; }`, I ended up having the program start with 1, and then for a million steps choose a random real number r in the interval [0, R] (uniformly distributed) and multiply the running result by r at each step.
- When R=2, it converged to 0.
- When R=3, it exploded to infinity.
So of course, the question anyone with a modicum of curiosity would ask is: for what R does the transition occur? I tried the first number between 2 and 3 that we would all think of, Euler’s number e, and sure enough, that conjecture was right. Would love to see a proof of this.
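The experiment described above can be sketched in a few lines of Python (the function name and step count here are illustrative, not the original script):

```python
import random

def product_walk(R, steps=100_000):
    """Start at 1; at each step, multiply by a uniform draw from [0, R]."""
    y = 1.0
    for _ in range(steps):
        y *= random.uniform(0, R)
    return y
```

In double precision the product leaves the representable range quickly, so a run with R = 2 ends at exactly 0.0 (underflow) and a run with R = 3 ends at `inf` (overflow), rather than merely "small" or "large".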
Now when I should be working, I’m instead wondering about the behavior of this script.
Ironically, rather than wasting my CPU’s time, I’m wasting my own. But it’s a beautiful phenomenon. I don’t regret it. :)
Answer
EDIT: I saw that you solved it yourself. Congrats! I’m posting this anyway because I was most of the way through typing it when your answer hit.
Infinite products are hard, in general; infinite sums are better, because we have lots of tools at our disposal for handling them. Fortunately, we can always turn a product into a sum via a logarithm.
Let $X_i \sim \text{Uniform}(0, r)$, and let $Y_n = \prod_{i=1}^n X_i$. Note that $\log(Y_n) = \sum_{i=1}^n \log(X_i)$. The eventual emergence of $e$ as important is already somewhat clear, even though we haven’t really done anything yet.
The more useful formulation here is that $\frac{\log(Y_n)}{n} = \frac{1}{n} \sum_{i=1}^n \log(X_i)$, because we know from the Strong Law of Large Numbers that the right side converges almost surely to $E[\log(X_i)]$. We have
$$E[\log(X_i)] = \int_0^r \log(x) \cdot \frac{1}{r} \, dx = \frac{1}{r} \Big[ x \log(x) - x \Big]_0^r = \log(r) - 1.$$
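As a sanity check on this value, a midpoint Riemann sum reproduces $\log(r) - 1$ numerically (a quick sketch; the function name is mine):

```python
import math

def expected_log_uniform(r, n=1_000_000):
    """Midpoint Riemann sum for E[log X] = integral of log(x) * (1/r) over [0, r]."""
    h = r / n  # cell width
    total = sum(math.log((k + 0.5) * h) for k in range(n))
    return total * h / r
```

The midpoint rule copes fine with the integrable singularity of $\log(x)$ at 0, and `expected_log_uniform(math.e)` comes out very close to $\log(e) - 1 = 0$.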
If $r < e$, then $\log(Y_n)/n \to c < 0$, which implies that $\log(Y_n) \to -\infty$, hence $Y_n \to 0$. Similarly, if $r > e$, then $\log(Y_n)/n \to c > 0$, whence $Y_n \to \infty$. The fun case is: what happens when $r = e$?
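Working with $\log(Y_n)/n$ directly, as a running average of $\log(X_i)$, sidesteps the floating-point overflow that kills the raw product, and makes the three regimes visible (a sketch; the function name is mine):

```python
import math
import random

def normalized_log(r, n=200_000):
    """Estimate log(Y_n)/n = (1/n) * sum of log(X_i) for X_i ~ Uniform(0, r)."""
    return sum(math.log(random.uniform(0, r)) for _ in range(n)) / n

# By the SLLN this should hover near log(r) - 1:
# negative for r = 2, positive for r = 3, and near 0 for r = e.
```

With $n = 200{,}000$ the standard error of the average is about $1/\sqrt{n} \approx 0.002$, so the sign of the drift is unambiguous for $r = 2$ and $r = 3$, while $r = e$ sits indistinguishably close to zero.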
Attribution
Source: Link, Question Author: Jake Mirra, Answer Author: Aaron Montgomery