I think the claim that “those studies sucked” and the accompanying link were in reference to:
the battle to get doctors to acknowledge that their “clinical experience” wasn’t better than simple linear models
The linked comment discusses a few different statistical prediction rules, not just wine-tasting. To the extent that the comment identifies systematic flaws in claims that linear models outperform experts, it does somewhat support the claim that “those studies sucked” (though I wouldn’t think it supports the claim sufficiently to actually justify making it).
(See Steven’s comment: the “those studies sucked” remark was meant as a reference to the linear-models-versus-expert-judgment series, not the psychotherapy studies. Obviously the link was supposed to be representative of a disturbing trend, not the sum total of the justification for my claims.)
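For readers who haven’t seen the statistical-prediction-rule literature: the “simple linear models” at issue are often just equal-weight sums of standardized cues (what Dawes called “improper linear models”). A minimal sketch of how such a rule ranks candidates, with invented cue values:

```python
# Sketch of an "improper" (unit-weighted) linear prediction rule, the kind of
# simple model the SPR literature pits against expert judgment.
# All cue values below are invented for illustration.

def zscores(xs):
    """Standardize a list of numbers to mean 0, standard deviation 1."""
    n = len(xs)
    mean = sum(xs) / n
    sd = (sum((x - mean) ** 2 for x in xs) / n) ** 0.5
    return [(x - mean) / sd for x in xs]

def unit_weighted_score(cue_columns):
    """Sum standardized cues with equal (unit) weights."""
    standardized = [zscores(col) for col in cue_columns]
    return [sum(vals) for vals in zip(*standardized)]

# Three hypothetical cues for five candidates
# (e.g. test score, GPA, interview rating).
cues = [
    [620, 580, 710, 650, 540],
    [3.1, 3.6, 3.9, 2.8, 3.3],
    [4, 2, 5, 3, 1],
]
scores = unit_weighted_score(cues)
ranking = sorted(range(5), key=lambda i: -scores[i])
print(ranking)  # candidates ordered by the rule's predicted performance
```

The striking empirical claim in that literature is that even these equal-weight rules tend to match or beat expert clinical judgment built on the same cues; the disputed question is how solid those comparisons actually were.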
FWIW I still like a lot of H&B research—I’m a big Gigerenzer fan, and Tetlock has some cool stuff, for example—but most of the field, including much of Tversky and Kahneman’s work, is hogwash: less trustworthy than parapsychology results, which are generally held to a much higher standard. This is what we’d expect given the state of the social sciences, but for some reason people seem to give social psychology and cognitive science a free pass rather than applying a healthy dose of skepticism. I suspect confirmation bias is at work: people are already pushing an ideology about how almost everyone is irrational and the world is mad, and so they’re much more willing to accept “explanations” that support that conclusion.
Start by reading Gigerenzer’s critiques. E.g. I really like the study showing that overconfidence goes away if you ask for frequencies rather than subjective probabilities—this actually gives you a rationality technique you can apply in real life! (In my experience it works, but that’s an impression, not a statistical finding.) I also quite liked his point about how just telling subjects to assume random sampling is misleading. You can find a summary of two of his critiques in a LW post by Kaj Sotala, titled “Heuristics and Biases Biases?” or something similar, and fastandfrugal.com should have relevant links, or links to links. It’s also worth noting that Gigerenzer has been cited many thousands of times and has written a few popular books. I especially like Gigerenzer because, unlike many H&B folk, he has a thorough knowledge of statistics, and he uses that knowledge to make very sharp critiques of Kahneman’s compare-to-an-allegedly-ideal-Bayesian-reasoner approach. (Of course it’s still possible to use a Bayesian approach, but the most convincing Bayesian papers I’ve seen were sophisticated (e.g. they didn’t skimp on information theory) and applied only to very simple problems.)
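To make the overconfidence finding concrete: calibration studies typically score overconfidence as mean stated confidence minus actual hit rate, while the frequency version instead asks “of these N questions, how many did you get right?” A toy calculation follows; all numbers are invented, not data from Gigerenzer’s experiments:

```python
# Toy illustration of how overconfidence is scored in calibration studies.
# All numbers are made up for illustration.

# Item-by-item condition: the subject states a probability of being
# correct for each of ten general-knowledge questions.
stated_confidence = [0.9, 0.8, 0.9, 0.7, 1.0, 0.8, 0.9, 0.6, 0.9, 0.8]
correct           = [1,   1,   0,   1,   1,   0,   1,   1,   0,   1]

mean_confidence = sum(stated_confidence) / len(stated_confidence)
hit_rate = sum(correct) / len(correct)
overconfidence = mean_confidence - hit_rate
print(f"mean confidence {mean_confidence:.2f}, hit rate {hit_rate:.2f}, "
      f"overconfidence {overconfidence:+.2f}")

# Frequency condition: afterwards the subject estimates how many of the
# ten they got right. Gigerenzer's finding is that these frequency
# judgments track the true hit rate far better than the item-by-item
# probabilities do.
estimated_correct = 7  # hypothetical frequency judgment
frequency_error = estimated_correct / 10 - hit_rate
print(f"frequency estimate error {frequency_error:+.2f}")
```

The practical technique is just the second half: before trusting a string of confident item-by-item judgments, ask yourself how many out of the last ten similar judgments you actually got right.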
I wouldn’t even say the problem lies with the H&B literature overall; it’s more that lots of H&B folk spin their results as if they somehow applied to real-life situations. That’s intellectually dishonest, and it leads to people like Eliezer having massive overconfidence in the relevance of H&B knowledge for personal rationality.
FYI to other readers: the citation does not support the claim; it’s about linear models of wine-tasting rather than experimental support for psychotherapy.
Tversky and Kahneman, hogwash? What? Can you explain? Or just mention something?
Awesome, big thanks!
All this rationality organizing talk has to have some misquotes :(