I found the article rather confused. The author begins by criticising the slogan as overused, but by the end says that we do need to distinguish correlation from causation and that the problem with the slogan is that it is just a slogan. His history of the idea ends in the 1940s, and he appears completely unaware of the work done on this issue by Judea Pearl and others over the last twenty years; unaware that there is indeed more, much more, than just a slogan. Even the basic idea of performing interventions to detect causality is missing. The same superficiality applies to the other issue he covers: distinguishing statistical significance from importance.
I’d post a comment at the Slate article to that effect, but the comment button doesn’t seem to do anything.
ETA: Googling /correlation causation/ doesn’t easily bring the modern work to light either. The first hit is the Wikipedia article on the slogan, which actually does have a reference to Pearl, but only in passing. Second is the xkcd about correlation waggling its eyebrows suggestively, third is another superficial article on stats.org, fourth is a link to the Slate article, and fifth is the Slate article itself. Further down is RationalWiki’s take on it, which briefly mentions interventions as the way to detect causality but I think not prominently enough. One has to get to the Wikipedia page on causality to find the meat of the matter.
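The intervention idea is simple enough to sketch in a toy simulation (entirely hypothetical numbers, not from the article or any of the pages above): a confounder drives two variables, so they correlate observationally, but setting one of them by intervention cuts the incoming arrow and the correlation vanishes, revealing that there was never a causal link between them.

```python
# Toy illustration of detecting causality by intervention.
# A confounder Z drives both X and Y; X has no causal effect on Y.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Observational world: Z -> X and Z -> Y, but no X -> Y arrow.
z = rng.normal(size=n)
x_obs = z + rng.normal(size=n)
y_obs = 2 * z + rng.normal(size=n)
r_obs = np.corrcoef(x_obs, y_obs)[0, 1]

# Interventional world, do(X): we set X ourselves, which cuts the Z -> X arrow.
x_do = rng.normal(size=n)
y_do = 2 * z + rng.normal(size=n)
r_do = np.corrcoef(x_do, y_do)[0, 1]

print(f"observational corr(X, Y)  = {r_obs:.2f}")  # clearly positive
print(f"interventional corr(X, Y) = {r_do:.2f}")   # near zero
```

Observationally X and Y are strongly correlated; under the intervention the correlation is statistically indistinguishable from zero, which is exactly the "no causation" verdict the slogan gestures at but cannot deliver on its own.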
I have a lot of sympathy for the article, though I agree it’s not very focused. In my experience, “correlation does not imply causation” is mostly used as some sort of magical talisman in discussion, wheeled out by people who don’t really understand it in the hope that it may do something.
I’ve been considering writing a discussion post on similar rhetorical talismans, but I’m not sure how on-topic it would end up being.
I would like to see an article which advised you on how you could:

- Recognize when you are using such a talisman, and/or
- Induce thought in someone else using such a talisman.
I think I have a pretty good idea of when I’m doing it. It’s a similar sensation to guessing the teacher’s password; that ‘I don’t really understand this, but I’m going to try it anyway to see if it works’ feeling.
This is my view as well.
Also, isn’t your ETA something we can fix? The search term “what does imply causation” (and variations thereof) clearly isn’t subject to a lot of competition. I’m half-tempted to do it myself.
Someone (preferably an expert) could work on the Wiki article, and LessWrong already has a lot of stuff on Pearl-style causal reasoning, but beyond that it's a matter of the reception of these ideas in the statistical community, which is up to them and which I know nothing about anyway. Do we have any statisticians here (IlyaShpitser?) who can say what the current state of things is? Is modern causal analysis routinely practised in statistical enquiries? Is it taught to undergraduates in statistics, or do statistics courses go no further on the subject than the randomised controlled trial?
Good questions. The history of causality in statistics is very complicated (partly due to the attitudes of big names like Fisher). There was one point not too long ago when people could not publish causality research in statistics journals as it was considered a “separate magisterium” (!). People who had something interesting to say about causality in statistics journals had to recast it as missing data problems.
All that is changing, somewhat. There were many, many talks on causality at JSM this year, and the trend is set to continue. The set of people who are aware of what the g-formula or ignorability is, for example, is certainly much larger than it was 20 years ago.
As for what "proper causal analysis" is: there is some controversy here, and unsurprisingly the causal inference field splits into camps (counterfactual vs. not, graphs vs. not, untestable assumptions vs. not, etc.). It's a bit like this xkcd: http://xkcd.com/1095/
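For readers who haven't met it, the g-formula mentioned above can be shown in a tiny worked example. These are hypothetical numbers for the simplest case (one binary confounder, the back-door adjustment), not a full treatment: the naive comparison P(Y=1 | X=1) − P(Y=1 | X=0) is confounded, while the g-formula P(Y=1 | do(X=x)) = Σ_z P(Y=1 | x, z) P(Z=z) recovers the causal effect.

```python
# Hypothetical binary example: confounder Z, treatment X, outcome Y.
p_z = {0: 0.5, 1: 0.5}          # P(Z=z)
p_x_given_z = {0: 0.2, 1: 0.8}  # P(X=1 | Z=z): Z strongly drives X
p_y_given_xz = {                # P(Y=1 | X=x, Z=z)
    (0, 0): 0.1, (1, 0): 0.2,   # X raises Y by 0.1 at either Z level,
    (0, 1): 0.5, (1, 1): 0.6,   # so the true causal effect is 0.1
}

def naive(x):
    # Observational P(Y=1 | X=x): weights Z by P(Z=z | X=x), which is
    # distorted because Z also determines X.
    p_xz = {z: (p_x_given_z[z] if x else 1 - p_x_given_z[z]) * p_z[z]
            for z in p_z}
    p_x = sum(p_xz.values())
    return sum(p_y_given_xz[(x, z)] * p_xz[z] / p_x for z in p_z)

def g_formula(x):
    # Interventional P(Y=1 | do(X=x)): weights Z by its marginal P(Z=z).
    return sum(p_y_given_xz[(x, z)] * p_z[z] for z in p_z)

print(naive(1) - naive(0))          # confounded estimate, ~0.34
print(g_formula(1) - g_formula(0))  # adjusted estimate, ~0.10
```

The naive contrast more than triples the true effect because treated units are disproportionately high-Z; the g-formula averages over Z's marginal distribution instead and gets it right.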