I’m reading a tutorial on expectation maximization which gives an example of a coin-flipping experiment (the description is at http://www.nature.com/nbt/journal/v26/n8/full/nbt1406.html?pagewanted=all). Could you please help me understand where the probabilities in step 2 of the process (i.e., in the middle of part b of the illustration) come from? Thank you.

1. EM starts with an initial guess of the parameters.
2. In the E-step, a probability distribution over possible completions is computed using the current parameters. The counts shown in the table are the expected numbers of heads and tails according to this distribution.
3. In the M-step, new parameters are determined using the current completions.
4. After several repetitions of the E-step and M-step, the algorithm converges.

**Answer**

These are the likelihoods of the corresponding set of 10 coin tosses having been produced by the two coins (using the current estimates of their biases), normalized to add up to 1. The estimated probability of $k$ out of $10$ tosses of coin $i$ (with $i\in\{A,B\}$) yielding heads is

$$p_i(k)=\binom{10}{k}\theta_i^k(1-\theta_i)^{10-k}\;.$$

The binomial coefficient is the same for both coins, so it drops out in the normalization, and only the ratio of the remaining factors determines the result.

For instance, in the second row, we have 9 heads and 1 tail. Given the current bias estimates $\theta_A=0.6$ and $\theta_B=0.5$, the factors are

$$\theta_A^9(1-\theta_A)^{10-9}\simeq0.004$$

and

$$\theta_B^9(1-\theta_B)^{10-9}\simeq0.001\;,$$

resulting in the numbers

$$\frac{0.004}{0.004+0.001}=0.8$$

and

$$\frac{0.001}{0.004+0.001}=0.2$$

in the second row.
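This E-step calculation can be sketched in Python. This is a minimal illustration, not code from the tutorial; the function name `responsibilities` is my own choice. Note that the binomial coefficient is included for completeness even though it cancels in the normalization:

```python
from math import comb

def responsibilities(heads, n, theta_a, theta_b):
    """Normalized likelihoods that a run of n tosses with the given
    number of heads came from coin A vs. coin B, under the current
    bias estimates theta_a and theta_b."""
    like_a = comb(n, heads) * theta_a**heads * (1 - theta_a)**(n - heads)
    like_b = comb(n, heads) * theta_b**heads * (1 - theta_b)**(n - heads)
    total = like_a + like_b
    return like_a / total, like_b / total

# Second row of the example: 9 heads out of 10 tosses, with the
# current estimates theta_A = 0.6 and theta_B = 0.5.
p_a, p_b = responsibilities(9, 10, 0.6, 0.5)
print(round(p_a, 2), round(p_b, 2))  # 0.8 0.2
```

The exact values are about 0.805 and 0.195; the tutorial's table rounds them to 0.8 and 0.2.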

**Attribution**
*Source: Link, Question Author: Martin08, Answer Author: joriki*