There are a variety of issues going on here. Manfred pointed out many of them. There’s another issue: you’ve had an influx of users, all of whom are arguing for essentially the same set of positions, not doing it very well, and with a bit of rudeness thrown in. One of the three is being particularly egregious, and I suspect there may be some spill-over in attitude from that user’s behavior towards how people are voting about you. I will note that in the threads responding to the various Popperian criticisms, various LW regulars are willing to say when another LWian has said something they think is wrong. It might help to distinguish yourselves if you were willing to point out when you think the others are wrong. For example, you haven’t posted at all in this thread. Do you agree with everything he has said there? If you disagree, will you say so, or do you feel a need to stay silent to protect a fellow member of your tribal group?
For what it is worth, I’m not a Bayesian. I think that Bayesianism has deep problems, especially surrounding 1) the difficulty of where priors come from and 2) the difficulty of meaningfully making Bayesian estimates about abstract systems. I’ve voiced those concerns before here, and many of those comments have been voted up. Indeed, I recently started a subthread discussing a problem with the Solomonoff prior approach, which has been voted up.
I agree with curi that the Conjunction Fallacy does not exist. But if I disagreed I would say so—Popperians don’t hold back from criticism of each other. If my criticism hit its mark, then curi would change his mind and I know that because I participate in Popperian forums that curi participates in. That said, most Popperians I know think along similar lines; I see more disagreement among Bayesians about their philosophy here.
Your thread is about a technical issue and I think Bayesians are more comfortable discussing these sorts of things.
I agree with curi that the Conjunction Fallacy does not exist.
He’s not doing a very good job making that case. Do you think you can do a better job?
Also, let’s go through some of his other claims in that thread. I’m curious which you agree with:
Do you agree with the rest of what he has to say in that thread, where he claims that it is “bad pseudo-scientific research designed to prove that people are biased idiots”? Do you agree with him that there is a deliberate “agenda” within the cognitive bias research “which has a low opinion of humans” and which is “treating humans like dirt, like idiots”?
Do you agree with his claim that the conjunction fallacy is a claim about all thought about conjunctions and not some conjunctions?
Do you agree with his claim that “probability estimate” is a technical term which we can’t expect people to know? Do you agree with his implicit claim that this should apply even to highly educated people who work as foreign policy experts?
Do you agree with the rest of what he has to say in that thread, where he claims that it is “bad pseudo-scientific research designed to prove that people are biased idiots”? Do you agree with him that there is a deliberate “agenda” within the cognitive bias research “which has a low opinion of humans” and which is “treating humans like dirt, like idiots”?
I don’t know if there is a deliberate agenda and I wouldn’t have stated things so baldly (and that might just be a hangup on my part). Let’s look at the Tversky and Kahneman paper that curi cited. The first sentence says:
Uncertainty is an unavoidable aspect of the human condition.
So in the very first sentence, the authors have revealed a low opinion of humans. They think humans have a condition, although they don’t explain what it is, only that uncertainty is part of it.
Later in the paper, they say:
Our studies of inductive reasoning have focused on systematic errors because they are diagnostic of the heuristics that generally govern inference and judgement.
So inference and judgement are governed by heuristics, genetic in origin (though this is just implied and the authors do nothing to address it). It’s not that humans come up with explanations and solve problems, it’s not that we are universal knowledge creators, it’s that we use heuristics handed down to us from our genes and we must be alerted to biases in them in order to correct them, otherwise we make systematic errors. So, again, a low opinion of humans. And we don’t do induction—as Popper and others such as Deutsch have explained, induction is impossible, it’s not a way we reason.
curi noted the authors also say:
Our problems, of course, were constructed to elicit conjunction errors, and they do not provide an unbiased estimate of the prevalence of these errors.
So they admit bias.
Do you agree with his claim that the conjunction fallacy is a claim about all thought about conjunctions and not some conjunctions?
Yes. It is based on inductivist assumptions about how people think, as the quote above illustrates. They disregard the importance of explanations and they think humans do probabilistic reasoning using in-born heuristics and that these are universal.
Do you agree with his claim that “probability estimate” is a technical term which we can’t expect people to know? Do you agree with his implicit claim that this should apply even to highly educated people who work as foreign policy experts?
Do you think foreign policy experts use probabilities rather than explanations?
Uncertainty is an unavoidable aspect of the human condition.
So in the very first sentence, the authors have revealed a low opinion of humans. They think humans have a condition, although they don’t explain what it is, only that uncertainty is part of it.
Um, I think you are possibly taking a poetic remark too seriously. If they had said “uncertainty is part of everyday life” would you have objected?
So inference and judgement are governed by heuristics, genetic in origin (though this is just implied and the authors do nothing to address it).
Heuristics are not necessarily genetic. They can be learned. I see nothing in their paper that implies that they were genetic, and having read a fair amount of what both T & K wrote, I saw no indication that they strongly thought that any of these heuristics were genetic.
It’s not that humans come up with explanations and solve problems, it’s not that we are universal knowledge creators, it’s that we use heuristics handed down to us from our genes and we must be alerted to biases in them in order to correct them, otherwise we make systematic errors. So, again, a low opinion of humans. And we don’t do induction—as Popper and others such as Deutsch have explained, induction is impossible, it’s not a way we reason.
Ok. This confuses me. Let’s say that humans use genetic heuristics: how is that a low opinion? Moreover, how does that prevent us from being universal knowledge creators? You also seem to be conflating whether something is a good epistemology with whether a given entity uses it. Whether humans use induction and whether induction is a good epistemological approach are distinct questions.
This seems close to, if anything, Christian apologists saying how if humans don’t have souls then everything is meaningless. Do you see the connection here? Just because humans have flaws doesn’t make humans terrible things. We’ve split the atom. We’ve gone to the Moon. We understand the subtle behavior of the prime numbers. We can look back billions of years in time to the birth of the universe. How does thinking we have flaws mean one has a low opinion of humans?
I’m curious, when a psychologist finds a new form of optical illusion, do you discount it in the same way? Does caring about that or looking for those constitute a low opinion of humans?
Our problems, of course, were constructed to elicit conjunction errors, and they do not provide an unbiased estimate of the prevalence of these errors.
So they admit bias.
That’s a tortured reading of the sentence. The point is that they wanted to see if humans engaged in conjunction errors. So they constructed situations where, if humans were using the representativeness heuristic or similar systems, the errors would be likely to show up. This is, from the perspective of Popper in LScD, a good experimental protocol, since if it didn’t happen, it would be a serious blow to the idea that humans use a representativeness heuristic to estimate likelihood. They aren’t admitting “bias”; their point is that since their experimental constructions were designed to maximize the opportunity for a representativeness heuristic to show up, they aren’t a good estimate of how likely these errors are to occur in the wild.
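Whatever one makes of the dispute over interpretation, the mathematical rule these experiments test is uncontroversial probability theory: for any events A and B, P(A and B) can never exceed P(A). A minimal sketch, using a made-up uniform sample space (the event labels are purely illustrative, not taken from the paper):

```python
from itertools import product

# Toy sample space: all 3-bit outcomes, each equally likely (hypothetical).
outcomes = list(product([0, 1], repeat=3))

def prob(event):
    """Probability of an event (a predicate over outcomes) under the uniform measure."""
    return sum(1 for o in outcomes if event(o)) / len(outcomes)

A = lambda o: o[0] == 1                  # an arbitrary event
B = lambda o: o[1] == 1                  # another arbitrary event
A_and_B = lambda o: A(o) and B(o)        # their conjunction

# The conjunction rule: the conjunction is a subset of each conjunct,
# so its probability cannot be larger.
assert prob(A_and_B) <= prob(A)
assert prob(A_and_B) <= prob(B)
print(prob(A), prob(A_and_B))  # prints: 0.5 0.25
```

Rating the conjunction as *more* probable than one of its conjuncts, as subjects reportedly did, contradicts this inequality regardless of which sample space or heuristic is assumed.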
Yes. It is based on inductivist assumptions about how people think, as the quote above illustrates. They disregard the importance of explanations and they think humans do probabilistic reasoning using in-born heuristics and that these are universal.
So it seems to me that you are essentially saying that you disagree with their experimental evidence on philosophical grounds. If your evidence disagrees with your philosophy the solution is not to deny the evidence.
Do you agree with his claim that “probability estimate” is a technical term which we can’t expect people to know? Do you agree with his implicit claim that this should apply even to highly educated people who work as foreign policy experts?
Do you think foreign policy experts use probabilities rather than explanations?
In some contexts, yes. For example, foreign policy experts working with economists or financial institutions will sometimes make probability estimates for them to work with. But let’s say they never do. How is that at all relevant to the questions at hand? Do you really think that the idea of estimating a probability is so strange and technical that highly educated individuals shouldn’t be expected to understand what is being asked of them? And yet you think that Tversky had a low opinion of humans? Moreover, even if they did have trouble understanding what was meant, do you expect that, by sheer coincidence, this would produce exactly the apparent bias one would expect given the conjunction fallacy?
You can read “human condition” as a poetic remark, but choosing a phrase such as that to open a scientific paper is imprecise and vague, and the fact that they chose this phrase reveals something of the authors’ bias, I think.
No, Tversky and Kahneman have not specifically said here whether the heuristics in question are genetic or not. Don’t you think that’s odd? They’re just saying we do reasoning using heuristics, but not explaining anything. Yet explanations are important; from these everything else follows.
That they think the heuristics are genetic is an inference, and googling around I see that researchers in this field talk about “evolved mental behaviour”, so I think the inference is correct. It means that some ideas we hold can’t be changed, only worked around, and that these ideas are part of us even though we did not voluntarily take them on board. So we involuntarily hold unchangeable ideas that we may or may not agree with and that may be false. It’s leading towards the idea that we are not autonomous agents in the world, not fully human. The idea that we are universal knowledge creators means that all of our ideas can be changed and improved on. If there are flaws in our ideas, we discard them once the flaws are discovered.
With regard to induction, epistemology tells us that it is impossible; therefore no creature can use it. Yes, I disagree with the experimental evidence on philosophical grounds; the philosophy is saying the evidence is wrong, that the researchers made mistakes. curi has given some theories about the mistakes the researchers made, so it does indeed seem as though the evidence is wrong.
I have no problem with the idea that probabilities help solve problems. Probabilities arise as predictions of theories, so they are important. But probability has nothing to do with the uncertainty of theories, which can’t be quantified, and it has no role in epistemology whatsoever. It’s taking an objective physical concept and applying it in a domain where it doesn’t belong. I could go on, but you mention LScD, so I presume you know some of these ideas, right? Have you read Conjectures and Refutations or Deutsch?
Well said. And by the way, about “human condition”: at first, from your previous comments here, I thought you might be overreacting to the phrase, but I found your email very convincing and I think you have it right. I think “poetic remark” is a terrible excuse—it’s merely a generic denial that they meant what they said, with the implicit claim that this is unrepresentative and that they were right the rest of the time. The apologist doesn’t argue this claim, or even state it plainly; it’s just the subtext.
Your explanation of how their work pushes in the direction of denying we’re fully human, via attacking our autonomy (and free will, I’d add), is nice.
One thing I disagree with is the presumption that an LScD reader would know what you mean. You’re so much more advanced than just the content of LScD. You can’t expect someone to fill in the blanks just from that.
It’s not an agenda in the sense of a political agenda (though it does have some connections to political ideas), nor a conspiracy, nor a consciously intended and promoted agenda.
But, they have a bunch of unconscious ideas—a particular worldview—which informs how they approach their research and, because they do not use the rigor of science which prevents such things, their worldview/agenda biases all their results.
The proper rigor of science includes things like describing the experimental procedure in your paper so mistakes can be criticized and it can be repeated without introducing unintended changes, and having a “sources of error” section where you discuss all the ways your research might be wrong. When you leave out standard parts of science like those, and other more subtle ones, you get unscientific results. The scientific method, as Feynman explained, is our knowledge about how not to fool ourselves (i.e. it prevents our conclusions from being based on our biases). When you don’t use it, you get wrong, useless and biased results by default.
One of the ways this paper goes wrong is that it doesn’t pay enough attention to the correct interpretation of the data. Even if the data were not itself biased—which they openly admit it is—their interpretation would be A) problematic and B) not argued for by the data itself (interpretations of data are never argued for by the data itself, but must be considered as a separate, philosophical issue!)
If you try hard enough, you can get people to make mistakes. I agree with that much. But what mistake are the people making? That’s not obvious, and the authors don’t seriously discuss the matter. For example, how much of the mistake people are making is due to miscommunication—that they read the question they are asked as having a meaning a bit different from the literal meaning the researchers consider the one true meaning? The possibility that the entire phenomenon they were observing, or part of it, is an aspect of communication, not of biases about probability, is simply not addressed. Many other issues of interpretation of the results aren’t addressed either.
They simply interpret the experimental data in a way in line with their biases and unconscious agendas, and then claim that empirical science has supported their conclusions.
It’s not an agenda in the sense of a political agenda (though it does have some connections to political ideas), nor a conspiracy, nor a consciously intended and promoted agenda.
But, they have a bunch of unconscious ideas—a particular worldview—which informs how they approach their research
Yes, I agree, and the ideas are not all unconscious either. What do you think the worldview is? I’m guessing the worldview has ideas in it like: animals create knowledge, but not so much as people; and nature (genes) influences human thought, leading to biases that are difficult to overcome and to special learning periods in childhood. It’s a worldview that denies people their autonomy, isn’t it? I guess most researchers looking at this stuff would be politically left, be unaware of good philosophy, and have never paid close attention to issues like coercion.
Yes. I think they would sympathize with Haldane’s “queerer than we can suppose” line (quoted in BoI) and the principle of mediocrity (in BoI).
There’s something subtle but very wrong with their worldview that has to do with the difference between problem finding and problem solving. These people are not bubbling with solutions.
A lot of what they are doing is excusing faults. Explaining faults without blaming human choices. Taking away our responsibility and our ability to be responsible. They like to talk about humans being influenced—powerless and controlled—by small and subtle things. This connects with the dominant opinion on Less Wrong that morality does not exist.
They have low standards. They know their “science” is biased, but it’s good enough for them anyway. They don’t expect, or strive for, better. They think people are inherently parochial—including themselves, whom they consider only a little less so—and they don’t mind.
Morality can’t exist without explanations, btw, and higher level concepts. Strong empiricism and instrumentalism—which dominate Less Wrong—destroy it pretty directly.
They would not like Ayn Rand. And they would not like Deutsch.
http://nobelprize.org/nobel_prizes/economics/laureates/2002/kahnemann-lecture.pdf

Together, we explored the psychology of intuitive beliefs and choices and examined their bounded rationality.
They take for granted that rationality is bounded and then seek out ways to show it, e.g. by asking people to use their intuition and then comparing that intuition against math—a dirty trick, with a result easily predictable in advance, in line with the conclusion they assumed from the start. Rationality is bounded—they have known that since college, merely by examining their own failings—and they’re just researching where the bounds are.
EDIT: http://www.timeshighereducation.co.uk/story.asp?sectioncode=26&storycode=415636

Why did the jump to universality occur in our Western society and not elsewhere? Deutsch rejects the explanations of Karl Marx, Friedrich Engels and Jared Diamond that the dominance of the West is a consequence of geography and climate
That’s another aspect of it. It’s the same kind of thing. If you establish how biased we are, then our success or failure depends not on us—human ideas and human choices—but on parochial details like our environment and whether it happens to be one our biases will thrive in or not.