AFAIK the distinction is that:
When you condition on a particular outcome for X, it affects your probabilities for every other variable that’s causally related to X, in either direction.
You gain information about variables that are causally downstream from X (its “effects”). Like, if you imagine setting X=x and then “playing the tape forward,” you’ll see the sorts of events that tend to follow from X=x and not those that tend to follow from some other outcome X=x′.
And, you gain information about variables that are causally upstream from X (its “causes”). If you know that X=x, then the causes of X must have “added up to” that outcome for X. You can rule out any configuration of the causes that doesn’t “add up to” causing X=x, and that affects your probability distributions for all of these causative variables.
When you use the do-operator to set X to a particular outcome, it only affects your probabilities for the “effects” of X, not the “causes.” (The first sub-bullet above, not the second.)
For example, suppose hypothetically that I cook dinner every evening, and that this process consists of the following steps, in order:
“W”: considering what ingredients I have in the house
“X”: deciding on a particular meal to make, and cooking it
“Y”: eating the food
“Z”: taking a moment after the meal to take stock of the ingredients left in the kitchen
Some days I have lots of ingredients, and I prepare elaborate dinners. Other days I don’t, and I make simple and easy dinners.
Now, suppose that on one particular evening, I am making instant ramen (X=making instant ramen). We’re given no other information about this evening; this is all we know.
What can we conclude from this? A lot, it turns out:
In Y, I’ll be eating instant ramen, not something else.
In W, I probably didn’t have many ingredients in the house. Otherwise I would have made something more elaborate.
In Z, I probably don’t see many ingredients on the shelves (a result of what we know about W).
This is what happens when we condition on X=making instant ramen.
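To make this concrete, here’s a minimal Python sketch of the conditioning case. Everything quantitative in it is a made-up assumption for illustration (the 50/50 prior on W, the 0.1/0.9 chances of ramen, and a Z simplified to reflect only W), and the conditioning itself is done by rejection sampling: simulate many evenings, then keep only the ones where ramen actually got made.

```python
import random

def sample_evening():
    # W: ingredient situation before cooking (assumed 50/50 prior)
    w = "many ingredients" if random.random() < 0.5 else "few ingredients"
    # X: the meal choice depends causally on W (assumed probabilities)
    p_ramen = 0.1 if w == "many ingredients" else 0.9
    x = "instant ramen" if random.random() < p_ramen else "elaborate meal"
    # Y: I eat whatever I cooked
    y = f"eating {x}"
    # Z: the post-meal shelf survey (simplified here to reflect only W)
    z = "shelves look full" if w == "many ingredients" else "shelves look bare"
    return w, x, y, z

# Condition on X = instant ramen by rejection sampling:
# keep only the simulated evenings where ramen was the meal.
evenings = (sample_evening() for _ in range(100_000))
conditioned = [ev for ev in evenings if ev[1] == "instant ramen"]

frac_many = sum(ev[0] == "many ingredients" for ev in conditioned) / len(conditioned)
print(f"P(W = many | X = ramen) = {frac_many:.2f}")  # about 0.10, well below the 0.5 prior
```

Conditioning has pulled our probability that W = “many ingredients” down from 0.50 to about 0.10: exactly the backwards inference described above.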
If instead we apply the do-operator to X=making instant ramen, then:
We learn nothing about W, and from our POV it is still a sample from the original unconditional distribution for W.
We can still conclude that I’ll be eating ramen afterwards, in Y.
We know very little about Z (the post-meal ingredient survey), for the same reason: Z mostly reflects W, and we’ve learned nothing about W.
Concretely, this models a situation where I first survey my ingredients like usual, and am then forced to make instant ramen by some force outside the universe (i.e. outside our W/X/Y/Z causal diagram).
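Here is the same toy model under the do-operator, as a sketch with the same made-up numbers. The only change is that X is overwritten from outside the diagram instead of being sampled from W:

```python
import random

def sample_evening_do_ramen():
    # W: sampled from its original prior, exactly as before
    w = "many ingredients" if random.random() < 0.5 else "few ingredients"
    # X: forced by a cause outside the W/X/Y/Z diagram, i.e. do(X = ramen);
    # note that W is never consulted
    x = "instant ramen"
    # Y and Z are generated just as before
    y = f"eating {x}"
    z = "shelves look full" if w == "many ingredients" else "shelves look bare"
    return w, x, y, z

intervened = [sample_evening_do_ramen() for _ in range(100_000)]

frac_many = sum(ev[0] == "many ingredients" for ev in intervened) / len(intervened)
print(f"P(W = many | do(X = ramen)) = {frac_many:.2f}")  # about 0.50: just the prior
```

This time the fraction stays at the 0.50 prior: forcing X tells us nothing about W, and whatever uncertainty we have about W flows straight through to Z.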
And this is a useful concept, because we often want to know what would happen if we performed just such an intervention!
That is, we want to know whether it’s a good idea to add a new cause to the diagram, forcing some variable to have values we think lead to good outcomes.
To understand what would happen in such an intervention, it’s wrong to condition on the outcome using the original, unmodified diagram – if we did that, we’d draw conclusions like “forcing me to make instant ramen would cause me to see relatively few ingredients on the shelves later, after dinner.”
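For completeness, here is the same distinction in symbols, in standard Pearl-style notation (assuming the natural diagram for this story: W → X, X → Y, W → Z, and X → Z, since cooking uses up ingredients, so the joint factorizes as p(w, x, y, z) = p(w) p(x|w) p(y|x) p(z|w, x)). Conditioning reweights the whole joint by Bayes’ rule:

$$p(w, y, z \mid X = x) = \frac{p(w)\, p(x \mid w)\, p(y \mid x)\, p(z \mid w, x)}{p(x)}$$

while the do-operator simply deletes the factor that generates X and plugs in the forced value:

$$p(w, y, z \mid \operatorname{do}(X = x)) = p(w)\, p(y \mid x)\, p(z \mid w, x)$$

The p(x|w) factor is exactly where the backwards inference about W lives; deleting it is why the intervention leaves W alone.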