I think this overstates the problems. Causation is a model. All models are wrong. Some models are useful.
For #1, it’s well understood that pure correlation tells us nothing about causality. Perhaps health determines exercise with a π/2 lag, rather than the reverse. For most real-world phenomena, the graphs aren’t that regular or repeating, so there are more hints about direction and lag. Additionally, there are often “natural experiments”, where an intervention changes one variable, and we can then see the directional correlation, which is a pretty strong hint of causation.
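To see why lag alone can’t settle direction for periodic signals, here is a minimal sketch (assuming the sinusoidal exercise/health example from the original post): a π/2 lag in one direction is indistinguishable from a 3π/2 lag in the other, so lagged correlation fits either causal story equally well.

```python
import numpy as np

# Hypothetical sinusoidal signals: "health trails exercise by pi/2"... maybe.
n_periods, per = 10, 1000
t = np.linspace(0, n_periods * 2 * np.pi, n_periods * per, endpoint=False)
exercise = np.sin(t)
health = np.sin(t - np.pi / 2)

q = per // 4  # samples in a quarter period

# Story A: exercise leads health by a quarter period -- correlation ~1.
lead_ex = np.corrcoef(exercise[:-q], health[q:])[0, 1]

# Story B: health leads exercise by three quarter periods -- also ~1.
lead_he = np.corrcoef(health[:-3 * q], exercise[3 * q:])[0, 1]

print(round(lead_ex, 3), round(lead_he, 3))  # both ~1.0
```

Both lagged correlations are essentially perfect, so the data alone can’t pick a direction; that’s exactly where irregular, non-repeating signals (or natural experiments) help.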
#2 is a case of incomplete rather than incorrect causation. This is true for any causal model—one can ask further upstream. X causes Y. What causes X? What causes the cause of X? In the pure reduction, the only cause of anything is the quantum state of the universe.
#3 is a communication failure—we forgot to say “compared to what” when we say “increases” risk of death. If we instead said “intentionally jumping out of a plane carries a 0.0060% risk of death”, that would be clearer. It doesn’t matter that crossing the street is more dangerous.
For most real-world phenomena, the graphs aren’t that regular or repeating, so there are more hints about direction and lag.
Yeah, though I think “at fourth glance” stands as it is: in the long run any bounded function will have zero correlation with its derivative.
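A quick numerical check of that claim, using sin as the bounded function: the integral of f·f′ telescopes to f²/2, which stays bounded, while the variances don’t shrink, so the correlation decays toward zero as the window grows. (The window lengths here are just illustrative choices.)

```python
import numpy as np

def corr_with_derivative(T: float) -> float:
    """Sample correlation between sin and its derivative cos over [0, T]."""
    t = np.linspace(0, T, 100_000)
    f = np.sin(t)
    df = np.cos(t)  # exact derivative of sin
    return np.corrcoef(f, df)[0, 1]

print(corr_with_derivative(1.0))     # strongly correlated over a short window
print(corr_with_derivative(1000.0))  # ~0 in the long run
```

Note the short-window correlation is large: “at first glance” f and f′ can look tightly coupled, and only the long run washes it out.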
#3 is a communication failure—we forgot to say “compared to what” when we say “increases” risk of death.
Compared to the control group. People often measure the effect of variable X on variable Y by randomly dividing a population into experiment and control groups, intervening on X in the experiment group, and measuring the difference in Y between groups. Well, I tried to show an example where intervening on X in either direction will increase Y.
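That scenario can be sketched concretely. Assume (hypothetically) a U-shaped response Y = (X − 5)² plus noise, with the population already sitting near the optimum X = 5: an RCT that pushes X up *or* down will then measure an increase in Y relative to the control group.

```python
import numpy as np

rng = np.random.default_rng(0)

def outcome(x):
    # U-shaped response with a small amount of measurement noise.
    return (x - 5.0) ** 2 + rng.normal(0, 0.1, size=x.shape)

# Population clustered near the optimum X = 5.
population = rng.normal(5.0, 0.2, size=10_000)
ctrl, up, down = np.array_split(rng.permutation(population), 3)

y_ctrl = outcome(ctrl).mean()
y_up = outcome(up + 1.0).mean()      # intervention: raise X by 1
y_down = outcome(down - 1.0).mean()  # intervention: lower X by 1

print(y_ctrl, y_up, y_down)  # both interventions exceed the control mean
```

Both treatment groups report higher Y than control, so “X increases Y” and “lowering X increases Y” would each be ‘confirmed’ by a perfectly run experiment.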