Convergence of np(n) where p(n) = ∑_{j=⌈n/2⌉}^{n−1} p(j)/j

Some years ago I was interested in the following Markov chain
whose state space is the positive integers. The chain begins at state “1”,
and from state “n” the chain next jumps to a state uniformly
selected from {n+1,n+2,…,2n}.
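The chain is simple to simulate; here is a minimal sketch (the function name run_chain is my own):

```python
import random

def run_chain(steps, rng):
    """From state n, jump to a state chosen uniformly from {n+1, ..., 2n}."""
    x = 1
    for _ in range(steps):
        x = rng.randint(x + 1, 2 * x)  # randint is inclusive on both ends
    return x

# the state grows roughly like exp((2*log(2) - 1) * steps)
print(run_chain(50, random.Random(0)))
```

Since every jump strictly increases the state, each state is visited at most once, so p(n) below is simply the probability of ever hitting n.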

As time goes on, this chain goes to infinity, with occasional
large jumps. In any case, the chain is quite unlikely to hit any
particular large n.

If you define p(n) to be the probability that this chain
visits state “n”, then p(n) goes to zero like c/n for some
constant c. In fact,

np(n) → 1/(2 log(2) − 1) as n → ∞.  (1)
In order to prove this convergence, I recast it as an analytic
problem. Using the Markov property, you can see that the sequence satisfies

p(1) = 1 and p(n) = ∑_{j=⌈n/2⌉}^{n−1} p(j)/j for n > 1.  (2)
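The recurrence is easy to iterate numerically; a minimal sketch (the helper name visit_probs is mine, and 1/(2 log(2) − 1) ≈ 2.5887 is the constant in question):

```python
import math

def visit_probs(N):
    """p(1..N) via p(n) = sum_{j=ceil(n/2)}^{n-1} p(j)/j, with p(1) = 1."""
    p = [0.0] * (N + 1)
    p[1] = 1.0
    for n in range(2, N + 1):
        p[n] = sum(p[j] / j for j in range((n + 1) // 2, n))
    return p

p = visit_probs(3000)
c = 1 / (2 * math.log(2) - 1)  # ≈ 2.5887
print(3000 * p[3000], c)  # n*p(n) appears to settle near c
```

The early values oscillate noticeably (e.g. 8·p(8) ≈ 2.74 but 9·p(9) ≈ 1.97), which is part of what makes the convergence delicate.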

For some weeks, using generating functions etc., I tried and failed to find
an analytic proof of the convergence in (1). Finally, at a conference
in 2003, Tom Mountford showed me a (non-trivial) probabilistic proof.

So the result is true, but since then I’ve
continued to wonder if I missed something obvious. Perhaps there is
a standard technique for showing that (2) implies (1).

Question: Is there a direct (short?, analytic?) proof of (1)?

Perhaps someone who understands sequences better than I do could take a shot at this.

Update: I’m digging through my old notes on this. I now remember that I had a proof (using generating functions) that if np(n) converges, then the limit is 1/(2 log(2) − 1). It was the convergence that eluded me.

I also found some curiosities like: ∑_{n=1}^∞ p(n)/(n(2n+1)) = 1/2.
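That identity can be checked numerically from the recurrence (a sketch with my own helper name; the tail of the series beyond N contributes only O(1/N²)):

```python
def visit_probs(N):
    """p(1..N) via p(n) = sum_{j=ceil(n/2)}^{n-1} p(j)/j, with p(1) = 1."""
    p = [0.0] * (N + 1)
    p[1] = 1.0
    for n in range(2, N + 1):
        p[n] = sum(p[j] / j for j in range((n + 1) // 2, n))
    return p

N = 4000
p = visit_probs(N)
total = sum(p[n] / (n * (2 * n + 1)) for n in range(1, N + 1))
print(total)  # approaches 1/2 from below
```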

Another update: Here is the conditional result mentioned above.

As in Qiaochu’s answer, define Q to be the generating function of p(n)/n, that is,

Q(t) = ∑_{n=1}^∞ (p(n)/n) t^n for 0 ≤ t < 1.
Differentiating gives

Q′(t) = 1 + (Q(t) − Q(t²))/(1 − t).

This is slightly different from Qiaochu’s expression because

p(n) ≠ ∑_{j=⌈n/2⌉}^{n−1} p(j)/j when n = 1,

so that p(1) has to be treated separately.

Differentiating again and multiplying by 1 − t, we get

(1 − t)Q″(t) = 2Q′(t) − 2tQ′(t²) − 1,

that is,

(1 − t)Q″(t) = 2∑_{n=1}^∞ p(n)(t^{n−1} − t^{2n−1}) − 1.

Assume that limit lim_{n→∞} np(n) = c exists. Letting t → 1 above, the left hand side gives c,
while the right hand side tends to 2c log(2) − 1, and hence c = 1/(2 log(2) − 1).

Note: ∑_{j=1}^∞ (t^j − t^{2j})/j = log(1 + t).
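Spelling out the interchange of summations behind the derivative of Q (a step left implicit above; the computation is standard):

```latex
\begin{align*}
Q'(t) &= \sum_{n=1}^{\infty} p(n)\,t^{n-1}
       = 1 + \sum_{n=2}^{\infty} t^{n-1}\sum_{j=\lceil n/2\rceil}^{n-1}\frac{p(j)}{j} \\
      &= 1 + \sum_{j=1}^{\infty}\frac{p(j)}{j}\sum_{n=j+1}^{2j} t^{n-1}
       = 1 + \sum_{j=1}^{\infty}\frac{p(j)}{j}\cdot\frac{t^{j}-t^{2j}}{1-t}
       = 1 + \frac{Q(t)-Q(t^{2})}{1-t}.
\end{align*}
```

Here the inner sum runs over n = j+1, …, 2j because those are exactly the states reachable from j in one step.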

New update: (Sept. 2)

Here's an alternative proof of the conditional result that my colleague Terry Gannon
showed me in 2003.

Start with the sum ∑_{n=2}^{2N} p(n), substitute the formula in the title,
exchange the variables j and n, and rearrange to establish the identity:

1/2 = ∑_{j=N+1}^{2N} ((j − N)/j) p(j).

If jp(j) → c, then

1/2 = lim_{N→∞} ∑_{j=N+1}^{2N} ((j − N)/j²) c = (log(2) − 1/2) c,

so that c = 1/(2 log(2) − 1).
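Unlike the limit statement, the identity itself is exact for every N, so it can be verified directly from the recurrence (a sketch, with my own helper name):

```python
def visit_probs(N):
    """p(1..N) via p(n) = sum_{j=ceil(n/2)}^{n-1} p(j)/j, with p(1) = 1."""
    p = [0.0] * (N + 1)
    p[1] = 1.0
    for n in range(2, N + 1):
        p[n] = sum(p[j] / j for j in range((n + 1) // 2, n))
    return p

N = 500
p = visit_probs(2 * N)
lhs = sum((j - N) / j * p[j] for j in range(N + 1, 2 * N + 1))
print(lhs)  # 0.5 up to floating-point rounding
```

For instance, with N = 1 the sum is (1/2)·p(2) = 1/2, and with N = 2 it is (1/3)p(3) + (1/2)p(4) = 1/6 + 1/3 = 1/2.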

New update: (Sept. 8) Despite the nice answers and interesting discussion below, I am still holding out for a (nice?, short?) analytic proof of convergence. Basic Tauberian theory is allowed 🙂

New update: (Sept 13) I have posted a sketch of the probabilistic proof of convergence under "A fun and frustrating recurrence sequence" in the "Publications" section of my homepage.

Final Update: (Sept 15th) The deadline is approaching, so I have decided to award the bounty to T..
Modulo the details(!), it seems that the probabilistic approach is the most likely to lead to a proof.

My sincere thanks to everyone who worked on the problem, including those who tried it but
didn't post anything.

In a sense, I did get an answer to my question: there doesn't seem to be an easy or standard
proof that handles this particular sequence.


Update: the following probabilistic argument I had posted earlier shows only that p(1)+p(2)++p(n)=(c+o(1))log(n) and not, as originally claimed, the convergence np(n)c. Until a complete proof is available [edit: George has provided one in another answer] it is not clear whether np(n) converges or has some oscillation, and at the moment there is evidence in both directions. Log-periodic or other slow oscillation is known to occur in some problems where the recursion accesses many previous terms. Actually, everything I can calculate about np(n) is consistent with, and in some ways suggestive of, log-periodic fluctuations, with convergence being the special case where the bounds could somehow be strengthened and the fluctuation scale thus squeezed down to zero.

p(n) ∼ c/n is [edit: only on average] equivalent to p(1) + p(2) + ⋯ + p(n) being asymptotic to c log(n). The sum up to p(n) is the expected time the walk spends in the interval [1,n]. For this quantity there is a simple probabilistic argument that explains (and can rigorously demonstrate) the asymptotics.
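The averaged statement can be sanity-checked from the recurrence: a difference quotient of the partial sums cancels the O(1) offset in S(n) ≈ c log(n) + O(1) (a sketch with my own naming; the tolerance is deliberately generous):

```python
import math

def visit_probs(N):
    """p(1..N) via p(n) = sum_{j=ceil(n/2)}^{n-1} p(j)/j, with p(1) = 1."""
    p = [0.0] * (N + 1)
    p[1] = 1.0
    for n in range(2, N + 1):
        p[n] = sum(p[j] / j for j in range((n + 1) // 2, n))
    return p

N = 4000
p = visit_probs(N)
c = 1 / (2 * math.log(2) - 1)
# (S(N) - S(N/10)) / log(10) estimates c, since S(n) ~ c*log(n) + O(1)
est = (sum(p[1:N + 1]) - sum(p[1:N // 10 + 1])) / math.log(10)
print(est, c)
```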

This Markov chain is a discrete approximation to a log-normal random walk. If X is the position of the particle, log X behaves like a simple random walk with steps μ ± σ, where μ = 2 log(2) − 1 = 1/c and σ² = (1 − μ)²/2 − μ² is the variance of the log of a uniform random variable on [1,2]. This is true because the Markov chain is bounded between two easily analyzed random walks with continuous steps.

(Let X be the position of the particle and n the number of steps; the walk starts at X=1, n=1.)

Lower bound walk L: at each step, multiply X by a uniform random number in [1,2] and replace n by n+1. log L increases, on average, by ∫_1^2 log(t) dt = 2 log(2) − 1 at each step.

Upper bound walk U: at each step, jump from X to a uniform random number in [X+1, 2X+1] and replace n by n+1.

L and U have means and variances that agree to within O(1/n), where the O() constants can be made explicit. Steps of L are i.i.d., and steps of U are independent and asymptotically identically distributed. Thus, the central limit theorem shows that log X after n steps is approximately Gaussian with mean nμ + O(log n) and variance nσ² + O(log n).

The number of steps for the particle to escape the interval [1,t] is therefore (log t)/μ, with fluctuations of size A√(log t) having probability that decays rapidly in A (bounded by |A|^p exp(−qA²) for suitable constants p, q). Thus, the sum p(1) + p(2) + ⋯ + p(n) is asymptotically equivalent to (log n)/(2 log(2) − 1).
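The escape-time claim is easy to probe by simulation (a Monte Carlo sketch; the seed and sample size are arbitrary choices of mine):

```python
import math
import random

def escape_steps(t, rng):
    """Steps until the chain (from state n, jump uniformly into {n+1,...,2n}) leaves [1, t]."""
    x, steps = 1, 0
    while x <= t:
        x = rng.randint(x + 1, 2 * x)
        steps += 1
    return steps

rng = random.Random(12345)
t, trials = 10**6, 2000
mu = 2 * math.log(2) - 1
mean = sum(escape_steps(t, rng) for _ in range(trials)) / trials
print(mean, math.log(t) / mu)  # both of size about 36
```

The empirical mean slightly exceeds (log t)/μ because of the overshoot at the boundary, which is O(1) and disappears from the asymptotics.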

Maybe this is equivalent to the 2003 argument from the conference. If the goal is to get a proof from the generating function, it suggests that dividing by (1-x) may be useful for smoothing the p(n)'s.

Source: Link, Question Author: Community, Answer Author: T..
