# How does expectation maximization work?

I’m reading a tutorial on expectation maximization which gives an example of a coin-flipping experiment (the description is at http://www.nature.com/nbt/journal/v26/n8/full/nbt1406.html?pagewanted=all). Could you please help me understand where the probabilities in step 2 of the process (i.e. in the middle of part b of the below illustration) come from? Thank you.

1. EM starts with an initial guess of the parameters.
2. In the E-step, a probability distribution over possible completions is computed using the current parameters. The counts shown in the table are the expected numbers of heads and tails according to this distribution.
3. In the M-step, new parameters are determined using the current completions.
4. After several repetitions of the E-step and M-step, the algorithm converges.

These are the likelihoods of the corresponding set of $10$ coin tosses having been produced by the two coins (using the current estimates of their biases), normalized to add up to $1$. The estimated probability of $k$ out of $10$ tosses of coin $i$ ($i\in\{A,B\}$) yielding heads is

$$P(k\,;\theta_i)=\binom{10}{k}\theta_i^k(1-\theta_i)^{10-k}\;.$$
The binomial coefficient is the same for both coins, so it drops out in the normalization, and only the ratio of the remaining factors determines the result.
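A quick numerical check of this cancellation (a sketch, using the second row of the example, i.e. $9$ heads out of $10$ tosses with $\theta_A=0.6$, $\theta_B=0.5$):

```python
from math import comb

theta_A, theta_B = 0.6, 0.5
k, n = 9, 10  # second row: 9 heads out of 10 tosses

# Full binomial likelihoods, including the binomial coefficient
full_A = comb(n, k) * theta_A**k * (1 - theta_A)**(n - k)
full_B = comb(n, k) * theta_B**k * (1 - theta_B)**(n - k)

# Likelihoods with the (identical) coefficient left out
part_A = theta_A**k * (1 - theta_A)**(n - k)
part_B = theta_B**k * (1 - theta_B)**(n - k)

# Both normalizations give the same posterior: the coefficient drops out
print(full_A / (full_A + full_B))  # ≈ 0.805
print(part_A / (part_A + part_B))  # ≈ 0.805, identical
```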

For instance, in the second row we have $9$ heads and $1$ tail. Given the current bias estimates $\theta_A=0.6$ and $\theta_B=0.5$, the factors are

$$\theta_A^9(1-\theta_A)^1=0.6^9\cdot0.4\approx0.0040$$

and

$$\theta_B^9(1-\theta_B)^1=0.5^9\cdot0.5\approx0.0010\;,$$

resulting in the numbers

$$\frac{0.0040}{0.0040+0.0010}\approx0.80$$

and

$$\frac{0.0010}{0.0040+0.0010}\approx0.20$$

in the second row.
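Putting the pieces together, here is a sketch of the whole EM loop for this example (not the tutorial authors' code; the heads counts per set of tosses are read off the figure in the tutorial):

```python
heads = [5, 9, 8, 4, 7]      # heads in each set of 10 tosses, from the figure
n = 10
theta_A, theta_B = 0.6, 0.5  # initial guesses (step 1)

for _ in range(50):
    # E-step: posterior probability that each set was produced by coin A,
    # computed exactly as above (the binomial coefficient cancels)
    exp_heads_A = exp_tails_A = exp_heads_B = exp_tails_B = 0.0
    for k in heads:
        like_A = theta_A**k * (1 - theta_A)**(n - k)
        like_B = theta_B**k * (1 - theta_B)**(n - k)
        w_A = like_A / (like_A + like_B)
        w_B = 1.0 - w_A
        # accumulate expected heads/tails counts for each coin
        exp_heads_A += w_A * k
        exp_tails_A += w_A * (n - k)
        exp_heads_B += w_B * k
        exp_tails_B += w_B * (n - k)
    # M-step: re-estimate the biases from the expected counts
    theta_A = exp_heads_A / (exp_heads_A + exp_tails_A)
    theta_B = exp_heads_B / (exp_heads_B + exp_tails_B)

print(round(theta_A, 2), round(theta_B, 2))  # ≈ 0.80 and ≈ 0.52, as in the figure
```

After the first E-step this reproduces the $0.80/0.20$ split for the second row, and the loop converges to roughly $\theta_A\approx0.80$, $\theta_B\approx0.52$, matching the values shown in the tutorial's figure.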