It is easier to imagine the rest of the universe being just as it is if a patient took pill A rather than pill B than it is to imagine what else in the universe would have had to be different if the temperature yesterday had been 30 degrees rather than 40. It may be the case that human actions seem sufficiently free that we have an easier time imagining only one specific action being different, and nothing else.
(T. VanderWeele, “Explanation in Causal Inference”, pp. 453–455) – quoted in J. Pearl’s blog post “Causation without Manipulation”
I recognize the idea behind the quote, but I wonder how true it really is, and why it would be so. If the pill counterfactual seems easier than the weather counterfactual, is that due to something fundamentally important about complexity, or is it just an illusion?
I ask because, when I try to think through the details of each scenario, the pill example stops seeming comparatively simple. Asking “how does pill A get to pill B’s location?” presents as many difficulties to me as asking “how did that heat get to that location?”. So maybe the difference lies in the fact that we tend not to look at details when evaluating counterfactuals about human decisions? A toy sketch of how I picture the two cases is below.
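To make this concrete, here is a minimal sketch of how I am thinking about it, written as a toy structural causal model. Everything in it (the variables, the equations, the numbers) is my own invention for illustration, not anything from VanderWeele or Pearl: the pill choice is a single “free” decision node, so overriding it leaves everything upstream untouched, while the temperature is derived from upstream variables, so overriding it leaves the question of what else would have had to differ unanswered.

```python
# Toy structural causal model (invented for illustration only) contrasting the
# pill counterfactual with the temperature counterfactual.
import random

def simulate(pill_override=None, temp_override=None, seed=0):
    rng = random.Random(seed)

    # Exogenous background of the world (hypothetical weather and biology factors).
    pressure = rng.gauss(1013, 5)
    humidity = rng.gauss(0.5, 0.1)
    biology = rng.gauss(0.0, 1.0)

    # The pill choice is a single decision node: overriding it changes nothing
    # upstream of it, only its descendants.
    pill = pill_override if pill_override is not None else "A"

    # Temperature is *derived* from upstream variables, so overriding it
    # silently breaks the equation that was supposed to produce it; this is
    # where the "what else would have had to differ?" question comes from.
    temperature = 40 + 2 * (pressure - 1013) - 10 * humidity
    if temp_override is not None:
        temperature = temp_override

    outcome = (1.0 if pill == "A" else 0.5) + 0.05 * temperature + biology
    return {"pill": pill, "temperature": round(temperature, 1),
            "outcome": round(outcome, 2)}

factual = simulate()
pill_cf = simulate(pill_override="B")   # only the pill and its descendants change
temp_cf = simulate(temp_override=30)    # the upstream story is left unspecified

print(factual, pill_cf, temp_cf, sep="\n")
```

In this sketch both counterfactuals are mechanically easy to compute, which is why I suspect the felt difference is about how much detail we demand, not about the computation itself.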
I think it has to do with both complexity and our recognition of complexity. I suspect the book goes into much more detail, but it’s expensive, so I don’t have it.
Just to clarify, this is a serious issue when doing counterfactual analysis; many counterfactuals can’t be easily estimated because of exactly this problem.
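For concreteness, here is the standard three-step recipe for computing a unit-level counterfactual (abduction, action, prediction) in a deliberately tiny linear model I made up for this comment. The point is that it requires the full structural equation and the unit’s noise term, not just observed data, which is part of why estimation is hard.

```python
# Hedged toy example of the abduction-action-prediction recipe.
# The linear model Y = beta * X + U is invented for illustration.

def counterfactual_outcome(x_obs, y_obs, x_cf, beta=2.0):
    """Abduction:  recover the unit-level noise U from the observed (x, y).
    Action:     set X to its counterfactual value x_cf.
    Prediction: recompute Y with the same U under the new X."""
    u = y_obs - beta * x_obs          # abduction
    return beta * x_cf + u            # action + prediction

# A patient who took pill A (x = 1) and had outcome 5; what if pill B (x = 0)?
print(counterfactual_outcome(x_obs=1.0, y_obs=5.0, x_cf=0.0))  # -> 3.0
```

If the intervention itself is ill-defined, as with “the temperature had been 30 rather than 40”, the action step has no single well-specified equation to act on, which is the estimation difficulty being pointed at here.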