Many cognitive biases don’t exist as such: many (perhaps most?) psychology results are wrong, and heuristics and biases research is no exception; entire methodologies (and thus entire sets of claimed biases) can be misguided. Don’t compartmentalize your knowledge of the many flaws of scientific research, especially psychology and neuroscience research. I’ve seen many people willing to link to studies that failed replication in support of a claim about the existence of a cognitive bias. Imagine someone doing this for parapsychology! And parapsychology is generally more rigorous; even so, that’s not enough to outweigh skepticism, because everyone knows that all psychology research is suspect, even research with apparently decent methodology and statistics. People are willing to lower their standards for H&B because “the world is mad” has become part of a political agenda justifying LessWrong’s claims to rationality. Don’t unevenly lower your standards; don’t be unevenly selective about which methodologies or results you’re willing to accept. Make that your claim to fame, not your supposed knowledge of biases.
On the meta level, you seem to be saying that none of them are worth learning. So, going down to the object level: do you deny that confirmation bias, hindsight bias, and anchoring are real effects? (To pick three I’m particularly confident in.)
I deny confirmation bias in many of its forms: it’s not consistently formulated, and this has been acknowledged as a serious problem, which is why Eliezer’s post on the subject is titled “Positive Bias”, a narrower concept. I don’t deny hindsight bias or anchoring, so of course I don’t assert that none of them are worth learning. That said, because it’s hard to tell which are real and which aren’t, one must be extremely careful.
Perhaps most importantly for LessWrong, I deny the kind of overconfidence bias found by the most popular studies of the subject: it simply disappears when you ask for frequencies rather than subjective probabilities, and brains naturally use frequencies.
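For concreteness, the overconfidence measure at issue can be sketched as a simple calibration check: compare mean stated confidence against the actual hit rate. (The data below are made up purely to illustrate the computation; nothing here is from any particular study.)

```python
# Hypothetical calibration data: (stated confidence, was the answer correct?).
judgments = [
    (0.9, True), (0.9, False), (0.8, True), (0.8, False),
    (0.7, True), (0.7, False), (0.6, True), (0.6, False),
]

# Overconfidence score = mean stated confidence minus actual hit rate.
mean_confidence = sum(c for c, _ in judgments) / len(judgments)
hit_rate = sum(ok for _, ok in judgments) / len(judgments)
overconfidence = mean_confidence - hit_rate

print(f"mean confidence: {mean_confidence:.2f}")  # 0.75
print(f"hit rate:        {hit_rate:.2f}")         # 0.50
print(f"overconfidence:  {overconfidence:+.2f}")  # +0.25
```

The frequency format Will is referring to would instead ask, after a block of items, “how many of those did you answer correctly?”, rather than eliciting a subjective probability for each item.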
OK, fair. Thanks. If you want to run through the list and say which ones you do and don’t agree with, I’d find that helpful.
That’d take time. I’ll say I’m generally skeptical of any results from Tversky, Kahneman, or Dawes; the conjunction fallacy, for example. Not that they have no good results, but they’re very spotty, and they get cited even when they’re out to lunch. I’ll add that I really like Gigerenzer, including his sharp critiques of Tversky and Kahneman’s naive “compare to an allegedly ideal Bayesian reasoner” approach. Gigerenzer has a very thorough knowledge of both H&B and statistics.
I’ll also note that oftentimes you don’t need strong empirical data to know a bias exists. E.g., many people have anecdotal experience with the planning fallacy, and I doubt anyone would deny the existence of the anchoring effect once it’d been brought to their attention. Of course, even then, studies are helpful for knowing how strong the biases are. Often, though, I wish psychology would just stop using statistics, which sets up all kinds of perverse incentives and methodological costs without adding much.
Here is my collection of critiques of Gigerenzer on Tversky & Kahneman. I side with Tversky & Kahneman on this one.
Ah, one meta thing to keep in mind is that, as a Bayesian, it’s actually sort of hard to even understand what Gigerenzer could possibly mean sometimes. Gigerenzer understands and appreciates Bayesianism, so I knew I had to be entirely missing something about what he was saying. E.g., I was shocked when he said the conjunction rule wasn’t a fundamental rule of probability and only applied in certain cases. I mean, what? It falls directly out of the axioms! Nonetheless, when I reread his arguments a few times, I realized he actually had an important meta-statistical point. Anyway, that’s just one thing to keep in mind when reading Gigerenzer, since you and I lean Bayesian. (As we should: Bayes really is a lot better.)
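For reference, the “falls directly out of the axioms” step is one line: by the product rule, for any events $A$ and $B$,

```latex
P(A \cap B) \;=\; P(A)\,P(B \mid A) \;\le\; P(A),
\qquad \text{since } 0 \le P(B \mid A) \le 1,
```

so a conjunction can never be more probable than either of its conjuncts.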
Okay. I also look forward to hearing what specific meta-statistical point you think Gigerenzer was making.
Would you prefer a discussion post, an email, or a comment reply here? (I’ll want to write a long response that covers all the points at once, at least about Gigerenzer etc.)
Discussion post, I suppose.
Has this happened yet? I didn’t miss it, right?
Correct, I’ll be sure to let you know when it happens.
I can only see a little from your links, but what I do see misses Gigerenzer’s point. Kahneman’s (and Tversky’s?) replies to Gigerenzer also miss the point. Also note that some of Gigerenzer’s studies contradict some of Tversky and Kahneman’s results, or at least the conclusions frequently drawn from those results; e.g., overconfidence disappears when you use frequencies instead of subjective probabilities. That said, I generally like Stanovich, so I’ll look more closely at what he says specifically.
I should note, it’s really unfortunate that this word “normative” isn’t tabooed more.
Also, Dawes is often totally out to lunch—you’ve seen a few reasons why in a highly upvoted comment on one of your posts. Do you agree that Dawes and his cadre of researchers are not trustworthy? (Note that Eliezer often recommends Dawes’ book, “Rational Choice in an Uncertain World”. I read an ’80s edition of that book and was horrified at the poor scholarship. Before then I’d had a high opinion of H&B.)
I’m interested in our disagreement; it seems pretty important to me, because it shapes our priors for how much respect we should give the common man’s opinion. I’ll read more from your links (e.g., buy or steal a book or two) and give you my updated opinion.
I would need more details in order to comment on specific studies or results. Which passages from Dawes reflect poor scholarship? Which Gigerenzer studies contradict K&H results or conclusions (stated in which papers)? I also look forward to a more specific explanation of what you think Gigerenzer’s point is, and why the articles I linked to fail to address it.
(For those who are interested, I believe the highly-upvoted comment Will refers to is this one.)
Thanks. Can you recommend a short primer of his (like a summary article)?
That anecdotal experience is quite easy to quantify. There’s a reason the words “experiment” and “experience” are so similar.
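As a sketch of that quantification (the task log below is invented), one could record estimated vs. actual durations and compute the overrun factor, which is about all a minimal personal planning-fallacy “experiment” needs:

```python
# Hypothetical personal log: (task, estimated hours, actual hours).
log = [
    ("write report", 4.0, 9.0),
    ("fix bug", 1.0, 3.5),
    ("review paper", 2.0, 2.5),
    ("prepare talk", 5.0, 8.0),
]

# Overrun factor per task, then summary statistics.
overruns = [actual / estimate for _, estimate, actual in log]
mean_overrun = sum(overruns) / len(overruns)
share_late = sum(o > 1.0 for o in overruns) / len(overruns)

print(f"mean overrun factor: {mean_overrun:.2f}x")   # 2.15x
print(f"fraction finished late: {share_late:.0%}")   # 100%
```

A consistent mean overrun well above 1.0x across many tasks is the anecdote made quantitative; studies then mainly add effect sizes and controls.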
I think the problem is not statistics but bad statistics. If data sharing, replication, and transparency of methods all continue to increase, I do expect most of psychology’s current problems will be vastly mitigated. But, that’s not the world we live in today.
Check out fastandfrugal.com. For critiques of Kahneman, I don’t think there’s a single summary; just search Google Scholar for “Gigerenzer Kahneman”.
Is blind application of data-mining packages increasing or staying constant at this point? If increasing, do the good trends outweigh it?
What is it about the blind application of data-mining packages that is not-good? (If it works for achieving the goals of the user more effectively than whatever they were doing before then good for them!)
I can’t tell if you’re making a joke or arguing that hand-applied statistical practices of amateurs are actually worse for truth-seekers than automated data-mining.
Was going for “ask a question in the hope of getting a literal answer”.
I don’t have much information about when data-mining packages are used, how effective they are for those uses, or what folks would have done if they had not used them.
I see. I don’t have any good resources for you, sadly. I’d ask gwern.
I was essentially asking for your pure opinion/best guess, i.e., an unpacking of what I infer were the opinions/premises implied by “[not] good”. Never mind. I’ll take it to be approximately “blind application of data-mining packages is worse than useless and gives worse outcomes than whatever they would or wouldn’t have done without the package”.
Sorry, I just don’t have a strong opinion. It’s hard for me to consider the counterfactual, because there are lots of selection effects on which studies I see, both from the present and from the time before software data-miners were popular.
Good question, and really hard to tell. Certainly it happens now! But I bet it happened in the past too. Whether data-sharing standards in publications have been rising is observable (i.e., people saying what they did to the data), and I’d be willing to bet it’s empirically getting better.
One more comment: statistics has a lot of problems even in theory. Using statistics to interpret how well people compare to an allegedly ideal statistical model, when you’re studying complex systems like brains and when methodology is really hard, also introduces a host of degrees of freedom along which to err or fudge the results.
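That “host of degrees of freedom” is easy to simulate: under the null, give yourself k shots at significance (k outcome measures, subgroups, analysis variants, etc.) and the false-positive rate balloons from the nominal 5%. This is a generic multiple-comparisons sketch, not a model of any particular study:

```python
import random

random.seed(0)  # deterministic run for this sketch

def runs_with_a_hit(n_runs: int, k_tests: int, alpha: float = 0.05) -> float:
    """Fraction of simulated null 'studies' in which at least one of k tests
    comes up significant, when each test alone has false-positive rate alpha."""
    hits = sum(
        any(random.random() < alpha for _ in range(k_tests))
        for _ in range(n_runs)
    )
    return hits / n_runs

for k in (1, 5, 20):
    analytic = 1 - (1 - 0.05) ** k  # exact rate for k independent tests
    print(f"k={k:2d}: simulated {runs_with_a_hit(20000, k):.3f}, "
          f"analytic {analytic:.3f}")
```

With 20 independent looks at the data, roughly 64% of pure-noise datasets yield at least one “significant” finding, which is the sense in which extra analytic freedom lets a researcher err or fudge without ever faking a number.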
I’ll put it differently again: I’m suspicious of all researchers who seem to have ideological axes to grind. Dawes clearly does, and many others seem to as well. Ideology plus incentives to spin and exaggerate doesn’t tend to lead anywhere good.