One thinks particularly of Robyn Dawes—I don’t know him from “evidence-based medicine” per se, but I know he was fighting the battle to get doctors to acknowledge that their “clinical experience” wasn’t better than simple linear models, and he was on the front lines of the fight against psychotherapy that had been shown to perform no better than talking to any bright person.
If you read “Rational Choice in an Uncertain World” you will see that Dawes is pretty definitely on the level of “integrate Bayes into everyday life”, not just Traditional Rationality. I don’t know about the historical origins of evidence-based medicine, so it’s possible that a bunch of Traditional Rationalists invented it; but one does get the impression that probability theorists trying to get people to listen to the research about the limits of their own minds were involved.
Those studies sucked. That book had tons of fallacious reasoning and questionable results. It was while reading Dawes’ book that I became convinced that H&B is actively harmful for rationality. Now that you say Dawes was also behind the anti-psychotherapy stuff I suddenly have a lot more faith in psychotherapy. (By the way, it’s not just that Dawes isn’t a careful researcher—he can also be actively misleading.)
I really hope Anna is right that the Center for Modern Rationality won’t be giving much weight to oft-cited, overblown H&B results (e.g., “confirmation bias”). Knowing about biases almost always hurts people.
ETA: Apologies for curmudgeonly tone; I’m just worried that unless executed with utmost care, the CMR idea will do evil.
Which illustrates an important heuristic: put minimal trust in researchers who seem to have ideological axes to grind.
That is an important heuristic (and upvoted), but I don’t think it’s one we should endorse without some pretty substantial caveats. If you deprecate any results that strike you as ideologically tainted, and your criteria for “ideologically tainted” are themselves skewed in one direction or another by identity effects, you can easily end up accepting less accurate information than you would by taking every result in the field at face value.
I probably don’t need to give any examples.
Agreed. I think your caveat is just a special case: put minimal trust in researchers who seem to have ideological axes to grind, including yourself. (And if you can’t discern when you might be grinding an axe then you’re probably screwed anyway.) (But yeah, I admit it’s a perversion of “researchers” to include meta-researchers.)
As The Last Psychiatrist would say, always be thinking about what the author wants to be true.
FYI to other readers: the citation does not support the claim; it’s about linear models of wine-tasting rather than experimental support for psychotherapy.
I think the claim that “those studies sucked” and the accompanying link were in reference to:
the battle to get doctors to acknowledge that their “clinical experience” wasn’t better than simple linear models
The linked comment discusses a few different statistical prediction rules, not just wine-tasting. To the extent that the comment identifies systematic flaws in claims that linear models outperform experts, it does somewhat support the claim that “those studies sucked” (though I wouldn’t think it supports the claim sufficiently to actually justify making it).
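To make that concrete, here is a minimal sketch (my own toy illustration with synthetic data, not anything from the linked comment or from Dawes’ actual datasets) of the kind of comparison those studies run: a unit-weight linear model in the spirit of Dawes’ “improper linear models,” scored against a simulated expert who knows the right cues but weighs them inconsistently from case to case.

```python
# Toy sketch only: synthetic data, hypothetical names, not a real study.
import numpy as np

rng = np.random.default_rng(0)
n_cases, n_cues = 200, 4

# Synthetic cues and an outcome that depends on them linearly plus noise.
cues = rng.normal(size=(n_cases, n_cues))
true_weights = np.array([0.5, 0.3, 0.15, 0.05])
outcome = cues @ true_weights + rng.normal(scale=0.5, size=n_cases)

# "Improper" unit-weight model: standardize each cue and just add them up.
z = (cues - cues.mean(axis=0)) / cues.std(axis=0)
unit_weight_prediction = z.sum(axis=1)

# Simulated expert: sees the same cues but applies inconsistent weights,
# the usual story for why clinical judgment underperforms simple models.
noisy_weights = true_weights + rng.normal(scale=0.4, size=(n_cases, n_cues))
expert_prediction = (cues * noisy_weights).sum(axis=1)

def validity(prediction):
    # Correlation between predictions and the actual outcome.
    return np.corrcoef(prediction, outcome)[0, 1]

print(f"unit-weight linear model r = {validity(unit_weight_prediction):.2f}")
print(f"simulated expert         r = {validity(expert_prediction):.2f}")
```

The toy setup only shows that inconsistent weighting loses predictive validity even when the judge knows which cues matter; whether the real studies established this cleanly is exactly what’s in dispute above.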
(See Steven’s comment, the “those studies sucked” comment was meant to be a reference to the linear model versus expert judgment series, not the psychotherapy studies. Obviously the link was supposed to be representative of a disturbing trend, not the sum total justification for my claims.)
FWIW I still like a lot of H&B research—I’m a big Gigerenzer fan, and Tetlock has some cool stuff, for example—but most of the field, including much of Tversky and Kahneman’s stuff, is hogwash, i.e. less trustworthy than parapsychology results (which are generally held to a much higher standard). This is what we’d expect given the state of the social sciences, but for some reason people seem to give social psychology and cognitive science a free pass rather than applying a healthy dose of skepticism. I suspect this is because of confirmation bias: people are already trying to push an ideology about how almost everyone is irrational and the world is mad, and thus are much more willing to accept “explanations” that support this conclusion.
Tversky and Kahneman, hogwash? What? Can you explain? Or just mention something?
Start by reading Gigerenzer’s critiques. E.g. I really like the study on how overconfidence goes away if you ask for frequencies rather than subjective probabilities—this actually gives you a rationality technique that you can apply in real life! (In my experience it works, but that’s an impression, not a statistical finding.) I also quite liked his point about how just telling subjects to assume random sampling is misleading. You can find a summary of two of his critiques in a LW post by Kaj Sotala, “Heuristics and Biases Biases?” or summat. Also fastandfrugal.com should have some links or links to links. Also worth noting is that Gigerenzer’s been cited many thousands of times and has written a few popular books. I especially like Gigerenzer because unlike many H&B folk he has a thorough knowledge of statistics, and he uses that knowledge to make very sharp critiques of Kahneman’s compare-to-allegedly-ideal-Bayesian-reasoner approach. (Of course it’s still possible to use a Bayesian approach, but the most convincing Bayesian papers I’ve seen were sophisticated (e.g. didn’t skimp on information theory) and applied only to very simple problems.)
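To illustrate the frequency-versus-probability point, here is a toy example (made-up numbers; this is the natural-frequency reformulation Gigerenzer uses for diagnostic problems, not the overconfidence-calibration study itself). Both routes give the same posterior; the counting version is the one Gigerenzer finds untrained people tend to get right.

```python
# Hypothetical numbers for illustration only.
prior, hit_rate, false_alarm_rate = 0.01, 0.80, 0.096

# Probability format: apply Bayes' rule to the stated probabilities.
posterior = (prior * hit_rate) / (
    prior * hit_rate + (1 - prior) * false_alarm_rate
)

# Natural-frequency format: count cases in an imagined group of 1000 people.
population = 1000
sick = round(population * prior)                                   # 10 people
sick_positive = round(sick * hit_rate)                             # 8 test positive
healthy_positive = round((population - sick) * false_alarm_rate)   # ~95 false alarms
posterior_from_counts = sick_positive / (sick_positive + healthy_positive)

print(f"Bayes' rule on probabilities: {posterior:.3f}")
print(f"counting natural frequencies: {posterior_from_counts:.3f}")
```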
I wouldn’t even say the problem is with the H&B literature overall; it’s just that lots of H&B folk spin their results as if they somehow applied to real-life situations. It’s intellectually dishonest, and it leads to people like Eliezer having massive overconfidence in the relevance of H&B knowledge for personal rationality.
Awesome, big thanks!
All this rationality organizing talk has to have some misquotes :(