If we know that there’s a burglar, then we think that either an alarm or a recession caused it; and if we’re told that there’s an alarm, we’d conclude it was less likely that there was a recession, since the recession had been explained away.
Is this to say that a given node/observation/fact can only have one cause?
More concretely, let's say we have nodes x, y, and z, with causal arrows from x to z and from y to z.
X     Y
 \   /
  \ /
   Z
If z is just an “and” logic gate that outputs True only when both x and y are True, then it seems like z must be caused by both x and y.
Am I mixing up my abstractions here? Is there some reason why logic-gate-like rules are disallowed by causal models?
Logic gates are allowed just fine. For example, if burglars and earthquakes both cause alarms, then A=OR(B,E). You could also have AND, or any other imaginable way of combining the variables.
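To make that concrete, here is a minimal sketch in Python (my own illustration, not from the thread) of a causal model whose alarm node is a deterministic logic gate of its parents:

    import random

    def sample_world():
        # Exogenous causes: burglar and earthquake are independent,
        # each true with probability 1/5 (the numbers used below).
        B = random.random() < 1 / 5
        E = random.random() < 1 / 5
        # The alarm is a deterministic OR gate of its parents; swapping in
        # (B and E) or any other Boolean function is equally legitimate.
        A = B or E
        return B, E, A

Nothing in the formalism forbids the gate being AND, XOR, or anything else; the diagram only fixes which parents a node is allowed to depend on, not how it combines them.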
The “explained away” thing isn’t worded very well. For example, imagine that B and E are independent and each has probability 1⁄5. Then learning that there was an alarm (A) raises your probabilities of both B and E to 5⁄9, but then learning that there was an earthquake (E) lowers your probability of a burglar (B) back to 1⁄5. That’s the “explained away” effect. With other logic gates you’d see other effects.
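A quick way to check those numbers is to enumerate the four (B, E) worlds and condition directly; here is a sketch assuming the same OR-gate model and the 1⁄5 priors above:

    from itertools import product

    p = 1 / 5  # prior probability of burglar (B) and of earthquake (E)

    def prob(event):
        # Sum the probability mass of the (B, E) worlds where `event` holds.
        total = 0.0
        for B, E in product([True, False], repeat=2):
            weight = (p if B else 1 - p) * (p if E else 1 - p)
            A = B or E  # the alarm is an OR gate of its parents
            if event(B, E, A):
                total += weight
        return total

    # P(B | A) = P(B, A) / P(A) = (1/5) / (9/25) = 5/9
    print(prob(lambda B, E, A: B and A) / prob(lambda B, E, A: A))

    # P(B | A, E) = P(B, A, E) / P(A, E) = (1/25) / (1/5) = 1/5
    print(prob(lambda B, E, A: B and A and E) / prob(lambda B, E, A: A and E))

The second line shows the “explained away” effect: once the earthquake is known, the alarm carries no further evidence about the burglar, so B drops back to its prior.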