Time to reach a final state in a random dynamical system (answer known, proof unknown)

Consider a dynamical system whose state space is the set of $2^n$ sequences of $n$ black or white characters, such as $BWBB\ldots WB$.

At every step, we choose a pair $(i,j)$ with $i<j$ uniformly at random and copy the $i$-th color into the $j$-th position. For example, if the state is $BWW$ and we choose $(1,3)$, then the next state would be $BWB$.

Starting with an initial state $BWW\ldots W$, we will reach the final state $BB\ldots B$ in finite time with probability $1$ (any initial segment of blacks persists forever, since a position can only be overwritten with a color from its left, and the all-black state is absorbing and reachable from every state).

What is the expected time to reach this final state $BB\ldots B$?

Computations in low dimensions suggest that the answer is exactly $(n-1)^2$ whenever $n>2$; this was “verified” numerically up to $n=13$ or so. But we can’t find a proof of this conjecture, nor even show that the answer is an integer. One expects that, if the answer is that simple, there should also be an elementary proof.
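For anyone who wants to reproduce the numerical evidence, here is a minimal Monte Carlo sketch in Python (the function name, seed, and trial count are arbitrary choices, not part of the original question):

```python
import random

def simulate(n, trials=20_000, seed=0):
    """Estimate the expected number of steps for BWW...W to reach BB...B."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        state = [1] + [0] * (n - 1)   # 1 = black, 0 = white
        blacks, steps = 1, 0
        while blacks < n:
            i, j = sorted(rng.sample(range(n), 2))  # uniform pair with i < j
            blacks += state[i] - state[j]           # black count after the copy
            state[j] = state[i]                     # copy color i -> j
            steps += 1
        total += steps
    return total / trials

for n in range(2, 8):
    print(n, simulate(n), (n - 1) ** 2)   # estimates hover around (n-1)^2
```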


I tried various kinds of induction. For example, I tried to use induction to compute the expected time to reach the state $BB\ldots B*$, where $*$ is an arbitrary color, and reduced the initial conjecture to the claim that the expected time from $BB\ldots B*$ to $BB\ldots BB$ is exactly $1$. But I don’t know how to prove that it is $1$.

Answer

This is now a complete answer.

It turns out that Markus was right all along: with this particular starting configuration, the number of black balls is a Markov chain, and at any point in time the distribution of the sequence, conditioned on the number of black balls, is uniform on the space of sequences starting with a black ball.


Let $\Omega$ be the set of sequences of $0$s and $1$s of length $n$ starting with $1$ (writing $1$ for black and $0$ for white).

To show that the number of black balls is a Markov chain, it is enough to show that if we take a random sequence in $\Omega$, distributed uniformly conditioned on the number of black balls, and apply one step of the process, then we still get a sequence distributed uniformly conditioned on the number of black balls.

More formally, let $\Omega' = \{1, \ldots, n\}$ be the possible numbers of black balls and $f : \Omega \to \Omega'$ be the counting map.
Let $F, F'$ be the free vector spaces on the bases $\Omega$ and $\Omega'$ respectively.
Your process is described by a linear endomorphism $T : F \to F$, and $f$ induces a linear map $f : F \to F'$.
Let $g : F' \to F$ be the map sending the basis vector $[k]$, for $k \in \Omega'$, to $\frac{1}{|f^{-1}(k)|} \sum_{\omega \in f^{-1}(k)} [\omega]$.

We obviously have $f \circ g = \mathrm{id}_{F'}$.
If we prove that the image of $g$ is stable under $T$, then there is a map $T' : F' \to F'$ such that $T \circ g = g \circ T'$.
Then we get (omitting the composition signs from now on)
$gfTgf = gf(Tg)f = gf(gT')f = g(fg)T'f = gT'f = Tgf.$

By induction it follows that $T^t gf = (Tgf)^t$ and $fT^t gf = (fTg)^t f = T'^t f$ for every $t \ge 0$ (writing $t$ for the number of steps, to avoid a clash with the length $n$).

Since we are only interested in the behaviour of $fT^t(V)$ for the initial vector $V = [10\ldots0] \in F$,
and because we have $gfV = V$ (the initial sequence is the only element of $\Omega$ with a single black ball), we are interested in the behaviour of
$fT^t(V) = fT^t gf(V) = T'^t f(V) = T'^t([1]).$

That is, $T'$ is the transition matrix of a Markov chain on $\Omega'$,
and we want to know the expected time to get from $1$ to $n$.
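For small $n$ these identities are easy to check numerically. Here is a quick NumPy sketch (variable names are mine) that builds $T$, $f$ and $g$ explicitly and verifies both $fg = \mathrm{id}_{F'}$ and the stability $Tg = gT'$ with $T' = fTg$:

```python
import itertools
import numpy as np

n = 5
omega = [s for s in itertools.product([0, 1], repeat=n) if s[0] == 1]
idx = {s: a for a, s in enumerate(omega)}
pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
N = len(pairs)  # N = n(n-1)/2

# T acts on distributions over omega: one uniformly chosen copy move.
T = np.zeros((len(omega), len(omega)))
for s in omega:
    for i, j in pairs:
        t = list(s)
        t[j] = s[i]
        T[idx[tuple(t)], idx[s]] += 1 / N

# f counts the black balls; g spreads [k] uniformly over its class.
f = np.zeros((n, len(omega)))
g = np.zeros((len(omega), n))
for s in omega:
    k = sum(s)
    f[k - 1, idx[s]] = 1
    g[idx[s], k - 1] = 1 / sum(1 for u in omega if sum(u) == k)

Tp = f @ T @ g                           # the lumped chain T' = f T g
assert np.allclose(f @ g, np.eye(n))     # f g = id on F'
assert np.allclose(T @ g, g @ Tp)        # image of g is stable under T
```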


Intuitively, if you interpret $gf$ as a process that randomly permutes the $n-1$ rightmost balls, this says that the two processes
“randomize; apply step; randomize” and “randomize; apply step” are equivalent.

Then for any $k$, the process “initialize; randomize; (apply step; randomize) × $k$”
is equivalent to the process “initialize; randomize; (apply step) × $k$”.

And since the initial configuration is already randomized, “initialize; randomize” is equivalent to “initialize”.

Therefore the original process is equivalent to “initialize; randomize; (apply step; randomize) × $k$”, in which the number of black balls is obviously a Markov chain.


Now I will show that the image of $g$ is stable under $T$.
Let $N = n(n-1)/2$ be the number of pairs $(i,j)$ with $1 \le i < j \le n$.

To prove this you essentially have to count things up. The “surprising” part is that
the number of ways to obtain a particular sequence as the result of a non-trivial copy move
does not depend on the order of the balls.
If you want to get a given sequence with $4$ ones by copying a $1$ onto a $0$,
the possible inputs are obtained by choosing two of its $1$s and replacing the right one with a $0$, and there are $\binom{4}{2}$ ways to do this whatever the positions of the ones.

Even though some input sequences produce a given output in more ways than others, since the inputs are all equiprobable,
the resulting probability of each output sequence doesn’t depend on the order of the balls.

Conversely, some inputs have a tendency to increase the sum and others have a tendency to decrease it,
but when you sum over every permutation of the inputs, you do get something uniform inside each equivalence class.


To compute $T(g(k))$, consider a random input sequence that has sum $k$ and is uniformly distributed, so that every sequence of sum $k$ has probability $1/\binom{n-1}{k-1}$ of being the input (there are $\binom{n-1}{k-1}$ sequences in $\Omega$ with sum $k$).

For each possible output sequence of sum $k$, consider in how many ways it can be reached.
To get a sequence of sum $k$ you need to make a move that does nothing (copying a $1$ onto a $1$ or a $0$ onto a $0$), and for each output sequence there are $k(k-1)/2 + (n-k)(n-k-1)/2$ moves that do this, so every sequence of sum $k$ is reached with total probability $\frac{k(k-1)+(n-k)(n-k-1)}{2N\binom{n-1}{k-1}}$;
summed over all these outputs, this gives $\frac{k(k-1)+(n-k)(n-k-1)}{2N}\, g(k)$.

Now consider the output sequences of sum $k+1$ (if $k<n$).
To get a sequence of sum $k+1$ you need to copy a $1$ onto a $0$.
For each output, there are $k$ sequences of sum $k$ that can produce it:
one produces it in one way, the next one in two ways, and so on up to $k$ ways.
Since every sequence of sum $k$ is equally probable, each output sequence is reached
with the same probability $(1+2+\cdots+k)/(N\binom{n-1}{k-1}) = k(k+1)/(2N\binom{n-1}{k-1})$; summed over the $\binom{n-1}{k}$ outputs,
this gives $\frac{k(k+1)\binom{n-1}{k}}{2N\binom{n-1}{k-1}}\, g(k+1) = \frac{(k+1)(n-k)}{2N}\, g(k+1)$.

Finally, consider the output sequences of sum $k-1$ (if $k>1$).
To get a sequence of sum $k-1$ you need to copy a $0$ onto a $1$.
For each output, there are $n-k$ sequences of sum $k$ that can produce it:
one produces it in one way, the next one in two ways, and so on up to $n-k$ ways.
Since every sequence of sum $k$ is equally probable, each output sequence is reached
with the same probability $(1+2+\cdots+(n-k))/(N\binom{n-1}{k-1}) = (n-k)(n-k+1)/(2N\binom{n-1}{k-1})$; summed over the $\binom{n-1}{k-2}$ outputs,
this gives $\frac{(n-k)(n-k+1)\binom{n-1}{k-2}}{2N\binom{n-1}{k-1}}\, g(k-1) = \frac{(n-k)(k-1)}{2N}\, g(k-1)$.

And $k(k-1)+(n-k)(n-k-1) + (k+1)(n-k) + (n-k)(k-1) = n(k-1)+(n-k)n = n^2-n = 2N$, so the three coefficients sum to $1$, as needed.
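So the lumped chain $T'$ is explicit: from count $k$ it moves up with probability $(k+1)(n-k)/(2N)$, down with probability $(n-k)(k-1)/(2N)$, and stays put otherwise. A short exact computation (Python fractions; the recursion on the differences $d_k = h_k - h_{k+1}$ of the expected hitting times $h_k$ is a standard birth-death trick, not part of the argument above) confirms the conjectured value:

```python
from fractions import Fraction

def hitting_time(n):
    """Exact expected time for the lumped chain to go from count 1 to count n."""
    up = lambda k: Fraction((k + 1) * (n - k), n * (n - 1))    # P(k -> k+1), 2N = n(n-1)
    down = lambda k: Fraction((n - k) * (k - 1), n * (n - 1))  # P(k -> k-1)
    d, h1 = Fraction(0), Fraction(0)   # d = h_k - h_{k+1}; h_n = 0
    for k in range(1, n):
        d = (1 + down(k) * d) / up(k)  # from h_k = 1 + up*h_{k+1} + down*h_{k-1} + stay*h_k
        h1 += d
    return h1

for n in range(2, 14):
    assert hitting_time(n) == (n - 1) ** 2   # exactly (n-1)^2, as conjectured
```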


Finally, we can use the fact that the sequences are uniformly distributed conditioned on the number of black balls to prove that $E[T_n - T_{n-1}] = 1$, where $T_k$ denotes the first time the $k$ leftmost balls are all black.

Consider, at each time, the probabilities that we go from a particular sequence with $n-1$ ones to the final sequence with $n$ ones.

At every time $t$, the $n-1$ sequences with sum $n-1$ are equally probable (each with probability, say, $p_t$); then a simple counting argument says that the final sequence is reached from $1\ldots10$ with probability $(n-1)p_t/N = 2p_t/n$, and from one of the $n-2$ other sequences with probability $(1+2+\cdots+(n-2))p_t/N = (n-2)(n-1)p_t/(2N) = (n-2)p_t/n$.

Summing those events over every time $t$, the probability that the last step comes from $1\ldots10$ is $2/n$. Since the last step comes from $1\ldots10$ exactly when the first $n-1$ balls turned black strictly before the last one, this means that $P(T_{n-1} < T_n) = 2/n$ and $P(T_{n-1} = T_n) = (n-2)/n$.

But if $T_{n-1} < T_n$, then the expected time for the last step is $N/(n-1) = n/2$ (because we are stuck on $1\ldots10$ until we move to $1\ldots11$, which happens at each step with probability $(n-1)/N = 2/n$).

Thus $E[T_n - T_{n-1}] = \frac{2}{n} \times \frac{n}{2} + \frac{n-2}{n} \times 0 = 1$.
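Both quantities are easy to test by simulation; a small sketch (with $T_k$ taken, as above, to be the first time the $k$ leftmost balls are all black, and with names of my own choosing) gives estimates close to $2/n$ and $1$:

```python
import random

def last_step_stats(n, trials=50_000, seed=1):
    """Estimate P(T_{n-1} < T_n) and E[T_n - T_{n-1}]."""
    rng = random.Random(seed)
    strict = gap = 0
    for _ in range(trials):
        state = [1] + [0] * (n - 1)
        t, t_prev = 0, None                       # t_prev will hold T_{n-1}
        while not all(state):
            if t_prev is None and all(state[:n - 1]):
                t_prev = t                        # the first n-1 balls are black
            i, j = sorted(rng.sample(range(n), 2))
            state[j] = state[i]
            t += 1
        if t_prev is None:                        # both prefixes completed together
            t_prev = t
        strict += t_prev < t
        gap += t - t_prev
    return strict / trials, gap / trials

print(last_step_stats(6))   # roughly (2/6, 1.0)
```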


You can in fact easily prove @leonbloy’s conjecture from the observation that $P(T_{n} > T_{n-1}) = 2/n$.

If you hide the last $n-k$ balls, then what you see is a mix of the same process on the first $k$ balls and of steps that copy some ball onto one of the hidden balls, i.e. steps that don’t visibly do anything (a copy onto a visible ball always comes from a visible ball, since $i<j$).
Therefore, even though you see a slowed-down process, this doesn’t change the probability $P(T_k > T_{k-1}) = 2/k$.

From there, it’s easy to get the expected time between consecutive prefix completions: if we have equality then $T_k - T_{k-1} = 0$,
and if not, at every step in between you have a probability $(k-1)/N$ of coloring the $k$-th ball black, so it takes $N/(k-1)$ steps on average.
Finally, the expected difference is $E[T_k - T_{k-1}] = \frac{N}{k-1} \times \frac{2}{k} = \frac{n(n-1)}{k(k-1)}$, and summing over $k$ recovers the conjectured answer: $E[T_n] = n(n-1)\sum_{k=2}^{n} \frac{1}{k(k-1)} = n(n-1)\left(1-\frac{1}{n}\right) = (n-1)^2$.
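As a last sanity check, one can estimate every increment by simulation and compare with $n(n-1)/(k(k-1))$ (again, function and variable names are arbitrary):

```python
import random

def gap_estimates(n, trials=50_000, seed=2):
    """Estimate E[T_k - T_{k-1}] for k = 2..n."""
    rng = random.Random(seed)
    hit_sum = [0] * (n + 1)          # hit_sum[k] accumulates T_k over the trials
    for _ in range(trials):
        state = [1] + [0] * (n - 1)
        t, k = 0, 1                  # k = current all-black prefix length
        while k < n:
            i, j = sorted(rng.sample(range(n), 2))
            state[j] = state[i]
            t += 1
            while k < n and state[k]:    # the prefix may grow by several balls at once
                k += 1
                hit_sum[k] += t
    return [(hit_sum[k] - hit_sum[k - 1]) / trials for k in range(2, n + 1)]

n = 6
print(gap_estimates(n))
print([n * (n - 1) / (k * (k - 1)) for k in range(2, n + 1)])   # 15, 5, 2.5, 1.5, 1
```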

Attribution
Source: Link, Question Author: Peter Franek, Answer Author: mercio
