My understanding right now is that an example of conditional independence would be:
If two people live in the same city, the probability that person A gets home in time for dinner and the probability that person B gets home in time for dinner are independent; that is, we wouldn’t expect one to have an effect on the other. But if a snow storm hits the city and introduces a probability C that traffic will be at a standstill, you would expect the probability of A getting home in time for dinner and the probability of B getting home in time for dinner both to change.
If this is a correct understanding, I guess I still don’t understand what exactly conditional independence is or what it does for us (why does it have a separate name, as opposed to just being compounded probabilities). And if this isn’t a correct understanding, could someone please provide an example with an explanation?
The scenario you describe provides a good example of conditional independence, though you haven’t quite described it as such. As the Wikipedia article puts it:

R and B are conditionally independent [given Y] if and only if, given knowledge of whether Y occurs, knowledge of whether R occurs provides no information on the likelihood of B occurring, and knowledge of whether B occurs provides no information on the likelihood of R occurring.
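In symbols, the quoted condition amounts to the standard definition (stated here for events with positive probability; the notation is mine, not from the quote):

```latex
% R and B are conditionally independent given Y iff
P(R \cap B \mid Y) = P(R \mid Y)\, P(B \mid Y),
% or equivalently, whenever P(B \cap Y) > 0,
P(R \mid B \cap Y) = P(R \mid Y).
```

The second form says exactly that once you know Y, learning B tells you nothing further about R.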
In this case, R and B are the events of persons A and B getting home in time for dinner, and Y is the event of a snow storm hitting the city. Certainly the probabilities of R and B will depend on whether Y occurs. However, just as it’s plausible to assume that if these two people have nothing to do with each other, their probabilities of getting home in time are independent, it’s also plausible to assume that, while they will both have a lower probability of getting home in time if a snow storm hits, these lower probabilities will nevertheless still be independent of each other. That is, if you already know that a snow storm is raging and I tell you that person A is getting home late, that gives you no new information about whether person B is getting home late. You’re getting information on that from the fact that there’s a snow storm, but given that fact, the fact that A is getting home late doesn’t make it more or less likely that B is getting home late, too.

So conditional independence is the same as normal independence, but restricted to the case where you know that a certain condition is or isn’t fulfilled. Not only can you not find out about A by finding out about B in general (normal independence), but you also can’t do so under the condition that there’s a snow storm (conditional independence).
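To make the snow-storm picture concrete, here is a sketch with made-up numbers (the 0.2, 0.7 and 0.1 below are assumptions for illustration only). It builds a small joint distribution in which A’s and B’s lateness are independent given the storm’s status, then checks the conditional-independence identity by enumeration:

```python
from itertools import product

# Hypothetical numbers: probability of a snow storm, and each person's
# probability of being late given the storm status. Given that status,
# A and B are modelled as independent (the assumption being illustrated).
p_storm = 0.2
p_late = {True: 0.7, False: 0.1}  # P(late | storm status)

def joint(storm, a_late, b_late):
    """Joint probability of one world (storm?, A late?, B late?)."""
    p = p_storm if storm else 1 - p_storm
    p *= p_late[storm] if a_late else 1 - p_late[storm]
    p *= p_late[storm] if b_late else 1 - p_late[storm]
    return p

def prob(pred):
    """Total probability of all worlds satisfying pred."""
    return sum(joint(*w) for w in product([True, False], repeat=3) if pred(*w))

# Check P(A late and B late | storm) == P(A late | storm) * P(B late | storm).
p_y = prob(lambda s, a, b: s)
lhs = prob(lambda s, a, b: s and a and b) / p_y
rhs = (prob(lambda s, a, b: s and a) / p_y) * (prob(lambda s, a, b: s and b) / p_y)
print(abs(lhs - rhs) < 1e-12)  # True: independent given the storm
```

Knowing the storm is raging, A’s lateness shifts nothing about B’s: both sides come out to 0.7 × 0.7 = 0.49.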
An example of events that are independent but not conditionally independent would be: You randomly sample two people A and B from a large population and consider the probabilities that they will get home in time. Without any further knowledge, you might plausibly assume that these probabilities are independent. Now you introduce event Y, which occurs if the two people live in the same neighbourhood (however that might be defined). If you know that Y occurred and I tell you that A is getting home late, then that would tend to increase the probability that B is also getting home late, since they live in the same neighbourhood and any traffic-related causes of A getting home late might also delay B. So in this case the probabilities of A and B getting home in time are not conditionally independent given Y, since once you know that Y occurred, you are able to gain information about the probability of B getting home in time by finding out whether A is getting home in time.
Strictly speaking, this scenario only works if there’s always the same amount of traffic delay in the city overall and it just moves to different neighbourhoods. If that’s not the case, then it wouldn’t be correct to assume independence between the two probabilities, since the fact that one of the two is getting home late would already make it somewhat likelier that there’s heavy traffic in the city in general, even without knowing that they live in the same neighbourhood.
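The fixed-total-delay caveat can be turned into an exact toy model (entirely my construction, for illustration): two neighbourhoods, exactly one of which is jammed on any given evening, and each person living in a uniformly random neighbourhood and getting home late precisely when theirs is the jammed one. Enumerating the eight equally likely worlds shows marginal independence but conditional dependence given Y:

```python
from itertools import product

# Toy model: neighbourhoods 0 and 1; exactly one is jammed each evening
# (uniformly chosen), so the city's total delay is constant. Each person
# lives in a uniformly random neighbourhood and is late iff it is jammed.
worlds = list(product([0, 1], repeat=3))  # (jammed, home_a, home_b), each 1/8

def prob(pred):
    return sum(1 for w in worlds if pred(*w)) / len(worlds)

a_late = lambda j, ha, hb: ha == j
b_late = lambda j, ha, hb: hb == j
same = lambda j, ha, hb: ha == hb  # event Y: they live in the same neighbourhood

# Marginally, the latenesses are independent: 1/4 == 1/2 * 1/2.
print(prob(lambda *w: a_late(*w) and b_late(*w)))  # 0.25
print(prob(a_late) * prob(b_late))                 # 0.25

# Given Y, they are not: P(both late | Y) = 1/2, not 1/4.
p_y = prob(same)
print(prob(lambda *w: same(*w) and a_late(*w) and b_late(*w)) / p_y)  # 0.5
print((prob(lambda *w: same(*w) and a_late(*w)) / p_y) ** 2)          # 0.25
```

Once you know they share a neighbourhood, A being late tells you their neighbourhood is the jammed one, so B must be late too; without that knowledge, A being late tells you nothing about B.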
To give a precise example: Say you roll a blue die and a red die. The two results are independent of each other. Now you tell me that the blue result isn’t a 6 and the red result isn’t a 1. You’ve given me new information, but that hasn’t affected the independence of the results. By taking a look at the blue die, I can’t gain any knowledge about the red die; after I look at the blue die I will still have a probability of 1/5 for each number on the red die except 1. So the probabilities for the results are conditionally independent given the information you’ve given me.

But if instead you tell me that the sum of the two results is even, this allows me to learn a lot about the red die by looking at the blue die. For instance, if I see a 3 on the blue die, the red die can only be 1, 3 or 5. So in this case the probabilities for the results are not conditionally independent given this other information. This also underscores that conditional independence is always relative to the given condition: in this case, the results of the dice rolls are conditionally independent with respect to the event “the blue result is not 6 and the red result is not 1”, but they’re not conditionally independent with respect to the event “the sum of the results is even”.
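Both dice claims can be verified by brute-force enumeration over the 36 equally likely outcomes (a sketch; the helper name `cond_indep` is mine):

```python
from itertools import product

dice = list(product(range(1, 7), repeat=2))  # (blue, red), all 36 equally likely

def cond_indep(given):
    """Check P(blue=i, red=j | given) == P(blue=i | given) * P(red=j | given)
    for every pair of faces (i, j)."""
    world = [w for w in dice if given(*w)]
    n = len(world)
    for i, j in product(range(1, 7), repeat=2):
        p_joint = sum(1 for b, r in world if b == i and r == j) / n
        p_blue = sum(1 for b, r in world if b == i) / n
        p_red = sum(1 for b, r in world if r == j) / n
        if abs(p_joint - p_blue * p_red) > 1e-12:
            return False
    return True

print(cond_indep(lambda b, r: b != 6 and r != 1))  # True: still independent
print(cond_indep(lambda b, r: (b + r) % 2 == 0))   # False: sum-even couples them
```

The first condition restricts the sample space to a product set (5 blue faces × 5 red faces), which is why independence survives; the sum-even condition is not a product set, which is exactly why seeing the blue die narrows down the red one.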