Living bias, not thinking bias
1. Biases, those traits which affect everyone but me
I recently had the opportunity to run an exercise on bias and rationality with a group of (fellow) university students. I wasn’t sure it was going to go down well. There’s one response that always haunts me when it comes to introducing bias: “That’s an interesting description of other people, but it doesn’t describe me.”
I can’t remember the details (and haven’t been able to track them down), but I once read about an experiment on some bias; let’s say it was hindsight bias. The research team carried out a standard experiment which showed that the participants were biased as expected. Afterwards, they told these participants about hindsight bias. Most of the participants thought this was interesting and probably explained the actions of other people in the experiment, but they didn’t think it explained their own actions.
So going into the presentation, this is what I was worried about: People thinking these biases were just abstract and didn’t affect them.
Then at the end, everyone’s comments made it clear that this wasn’t the case. They really had realised that these were biases which affected them. The question then is, what led them to reach this conclusion?
2. Living history, living bias
All of the other planets (and the Earth) orbit the Sun. Once upon a time, we didn’t believe this: We thought that these planets (and the Sun) orbited the Earth.
Imagine that you’re alive all that time ago, when the balance of evidence has just swung so that it favours the theory that the planets orbit the Sun. However, at the time, you steadfastly insist that they orbit the Earth. Why? Because your father told you they did when you were a child and you always believe things your father told you. Then a friend explains all of the evidence in favour of the theory that the planets orbit the Sun. Eventually, you realise that you were mistaken all along and, at the same time, you realise something else: You realise that it was a mistake not to question a belief just because your father endorsed it.
If you think about history, you learn what beliefs were wrong. If you live history, you learn this and then you also learn what it feels like to mistakenly endorse an incorrect belief. Maybe the next time the situation occurs, you can avoid making the same error.
In teaching people about biases, I think it’s best to help students to live biases and not just think about them. That way, they’ll know what it feels like to be biased and they’ll know that they are biased.
3. Rationality puzzles
One of the best ways to do this, and the technique I used in my presentation, seems to be the use of rationality puzzles. Basically, these are puzzles where the majority of respondents tend to reason in a biased or fallacious way. Run a few of these puzzles and most students will reason incorrectly in at least one of them. This gives them a chance to experience being biased. If lessons focused on an abstract presentation of biases instead, the student would think about the bias but not live it in the same way.
So one example rationality puzzle is the 2, 4, 6 task. When I ran this exercise for my presentation, I broke the group up into pairs and made one member of each pair the questioner and the other the respondent.
The respondent was given a slip of paper with a number rule written on it. This was a rule that a sequence of three numbers could either meet or fail to meet. I won’t mention what the rule was yet, to give those who haven’t come across the puzzle a chance to think about how they would proceed.
The questioner’s job was to guess this rule. They were given one clue: the sequence 2, 4, 6 met the rule. The questioner was then allowed to ask whether other three-number sequences met the rule and the respondent would let them know whether they did. The questioner could ask about as many sequences as they wanted to and, once they were confident, they were to write their guess down (I limited the exercise to five minutes for practical purposes and everyone had written down an answer by then).
The answer was: Any three numbers in ascending order. No students in the group got the right answer.
I then used the exercise to explain a bias called positive bias. First, I noted that only 21% of respondents reach the right answer on this task. Then I pointed out that the interesting point isn’t this figure but rather why so few people reach the right answer. Specifically, people think to test positive, rather than negative, cases. In other words, they’re more likely to test cases that their theory predicts will occur (in this case, those that get a yes answer) than cases that their theory predicts won’t. So if someone’s initial theory was that the rule was “three numbers, each two higher than the previous one”, then they might test “10, 12, 14”, as this is a positive case for their theory. On the other hand, they probably wouldn’t test “10, 14, 12” or “10, 13, 14”, as these are negative cases for their prediction of the rule.
This demonstrates positive bias: the bias toward thinking to test positive, rather than negative, cases for a theory (see here for previous discussion of the 2, 4, 6 task on Less Wrong).
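For anyone who wants to see the structure of the task laid bare, here is a minimal sketch in Python. The hidden rule and the example theory are the ones described above; the function names and the particular test sequences are my own illustrative choices.

```python
# A minimal sketch of the 2, 4, 6 task, assuming the rule and the example
# theory described above. Names and test sequences are illustrative.

def hidden_rule(seq):
    """The respondent's secret rule: any three numbers in ascending order."""
    a, b, c = seq
    return a < b < c

def initial_theory(seq):
    """A typical first guess: each number is two higher than the previous."""
    a, b, c = seq
    return b == a + 2 and c == b + 2

# Positive cases: sequences the theory predicts will get a "yes".
# Each also satisfies the hidden rule, so the answers can never
# distinguish the (wrong) theory from the truth.
for seq in [(10, 12, 14), (1, 3, 5), (100, 102, 104)]:
    print(seq, hidden_rule(seq))  # True every time

# Negative cases: sequences the theory predicts will get a "no".
# The unexpected "yes" for (10, 13, 14) immediately refutes the theory.
for seq in [(10, 14, 12), (10, 13, 14)]:
    print(seq, hidden_rule(seq))  # False, then True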
Puzzles like this allow the student to live the bias and not just consider it on an abstract level.
4. Conclusion
In teaching people about biases, we should be trying to make them live biases rather than just think about them. Rationality puzzles offer one of the best ways to achieve this.
Of course, for any individual puzzle, some people will get the right answer. With the 2, 4, 6 puzzle in particular, a number of us have found people perform better on this task in casual, rather than formal, settings. The best way to deal with this is to present a series of puzzles that reveal a variety of different biases. Most people will reach the wrong answer in at least one puzzle.
5. More rationality puzzles
Bill the accountant and the conjunction fallacy
World War II and Selection Effects (not quite a puzzle yet, but it feels like it could be made into one)
(The answer to “Is the Sun orbiting Earth?” is “No, the Earth is spinning, and the Sun stays put,” not “No, the Earth orbits the Sun.”)
Er, the sun orbits the center of mass of the solar system, doesn’t it?
EDIT: I just now realized the question about orbits is not “how do these bodies travel through space?” but “why does the sun move across the sky?”, and that makes everything make more sense.
Well, the center of mass of the solar system is close enough to the center of the sun that within the frame of reference of the solar system it can reasonably be said to be staying put.
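A rough back-of-the-envelope check, using approximate values and ignoring every planet except Jupiter (which dominates the planetary contribution):

```python
# Back-of-the-envelope: distance from the Sun's centre to the
# Sun-Jupiter barycentre. Approximate textbook values.
m_sun = 1.989e30   # kg
m_jup = 1.898e27   # kg
a_jup = 7.785e11   # m, Jupiter's mean orbital radius
r_sun = 6.957e8    # m, solar radius

d = m_jup * a_jup / (m_sun + m_jup)
print(d / r_sun)   # ~1.07: just outside the solar surface
print(d / a_jup)   # ~0.001: negligible on the scale of the orbits
```

So the barycentre in fact sits just outside the solar surface, but at about a thousandth of Jupiter’s orbital radius it is negligible on the scale of the planetary orbits.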
Fixed, I hope.
I agree with what Jack said, in theory, but take your point in terms of the clarity of my writing.
Sort of. You still need to explain the Sun moving along the ecliptic; in Ptolemaic astronomy that is done by having the Sun rotate around the Earth just a little more slowly than the stellar sphere rotates around the Earth. On occasion there were even a few people who hypothesized the Earth rotating to explain diurnal motion but kept the planets orbiting around the Earth. But since the reasons against having the Earth move at all came from the mixed-up internal logic of Aristotelian physics, letting the Earth move at all weakened the impulse to keep the Earth at the center.
Here is an interesting related video.
I actually tried the 2-4-6 puzzle on both my brothers, and they both got it right because they thought there was some trick to it and so kept pressing until they were sure (and even after ~20 questions still didn’t fully endorse their answers). Maybe I have extra-non-biased brothers (not too likely), or maybe the clinical 2-4-6 test is so likely to be failed because students expect a puzzle and not a trick. That is to say, you are in a position of power over them and they trust you to give them something similar to what they’ve been given in the past. Also there’s an opportunity cost to keep guessing in the classroom setting, both because you have less class time to learn and because if other students have already stopped you might alienate them by continuing. Point is, I’ve seen markedly better results when this puzzle is administered in a casual or ‘real-world’ setting. I intend to try it on other people (friends, co-workers), and see if the trend continues. Anyone else tried it and gotten this result?
I’ve had the same result.
I tried it on friends and family and 3 of 5 got it right.
However, in this group no-one got it right.
I think the best solution is to run a number of puzzles in conjunction. Not everyone will fall for all of them but most people will fall for some of them. Maybe I should emphasise that point more.
Yes, that’s the result I normally get anecdotally, except the time I described the puzzle so badly the subjects didn’t know what to do.
Positive Bias is distinct from Confirmation Bias.
I did link to the discussion of this in the post. I chose to use the phrase “confirmation bias” because that is the phrase used in the literature (as far as I know) and I think it’s possible for the distinct language of a community to become too disconnected from standard usage.
However, given that this comment has a number of upvotes, I’ll probably change it. I can see the distinction and I can see why we would want separate labels for distinct ideas.
I notice you have changed the article to use the name “positive bias”, but you are still describing it as if it were confirmation bias: “people look for confirming rather than disconfirming evidence”. That is not what is going on. They don’t avoid testing the sequence 10, 13, 14 because it would disconfirm their theory; they don’t even think of testing it, because their theory doesn’t say it is a magic sequence and they want to test magic sequences. They don’t even notice that their theory of what is a magic sequence is also implicitly a theory of what is not a magic sequence, and don’t realize they should also test that the theoretically non-magic sequences are in fact not magic.
This seems like the clearest way to describe the case in English. The problem in the 2, 4, 6 problem is exactly that people test sequences that would confirm rather than disconfirm their theory. The problem is exactly that they look for confirming rather than disconfirming evidence.
The distinction Eliezer makes is between the tendency to seek what he calls positive and negative cases (positive bias) and the tendency to seek to preserve our original beliefs (confirmation bias).
By this light, I’m talking about positive bias. I don’t at any point say that the participant is seeking confirming rather than disconfirming evidence in order to preserve their beliefs. Instead, I’m simply noting a bias toward searching for a certain sort of test case.
But I think that those test cases are more clearly described as confirming or disconfirming cases rather than positive or negative cases. Everyone immediately knows what a confirming case is. On the other hand, I don’t think the word positive is immediately clear.
So I changed the name from confirmation bias to positive bias to individuate which of the two biases I’m talking about but I still think the bias is best explained in the language of confirmation and disconfirmation.
So I agree that they don’t avoid testing the sequence 10, 13, 14 because it would disconfirm their theory (and they want to maintain their original belief) but I do think that people don’t think of testing disconfirming sequences because such sequences are not magic sequences suggested by their theory.
It seems that we disagree on which language is clearest. My opinion: It’s only the prior debate over this topic on Less Wrong that makes the word confirming seem confusing. Most newcomers to the idea would find the word confirming more clear here. Old timers will understand these issues anyway so I want to explain clearly to newcomers.
Ideally, I’d prefer everyone to understand it, of course. Rather than keep editing it, I’ll post my latest paragraph here and, if you have time, I’d be interested to know whether you think this works better. If so, I’ll edit when I’m back at the computer (heading out soon).
Outside view vs. inside view. From outside, it is obvious that people are imperfect and do a lot of stupid things. Luckily, none of this applies to me! :D
I have a similar experience when I talk with people about cults. The first assumption is that only a stupid person would join a cult, because joining a cult is obviously stupid. (The problem is, it is stupid only from a non-member’s point of view.) When I explain to them that joining a cult is usually not a conscious, well-informed decision, but a long road paved with deception and manipulation, some of them agree that perhaps an intelligent person can join a cult too… but it couldn’t happen to them, because there is no way they could fall prey to manipulation, ever. And so far I have found no way to convince them otherwise. From outside, they seem just as vulnerable as anyone; but I guess most people can’t even imagine themselves from outside.
Let’s generalize it: Psychology—an interesting description of other people but it doesn’t describe me.
This seems to devolve to p-zombie solipsism: other people’s “thoughts” and “intentions” may be merely physical processes in their brains, but I am an authentic soul, the true author of my own thoughts and intentions, which are really real.
I think the 2-4-6 task is good at illustrating superstition: it shows how “obvious” and fundamental the wrong assumptions of a theory can be, and that even the most natural-seeming features must nonetheless be tested.
I once administered the 2-4-6 task to a friend. He made many guesses, and explained why he had made some of them when we were done. One particularly nice hypothesis he had tested was that the sum of the numbers had to be less than a fixed value. He didn’t solve the problem and, in fact, despite his search, he never found an example which the rule rejected before he gave up. My friend was looking for disconfirming evidence; he was just really bad at it. So bad he never thought to try negative numbers, or numbers in non-increasing order, or listing a smiley face instead of an integer to see whether the rule was defined outside the domain I had specified.
Of course I don’t deny the existence of positive bias based on one game with one person, but because of that experiment I’m curious: did you provide the group with a chance to share how well their performance was explained by positive bias or did you just...look for confirming evidence?
Perhaps he thought it was about sets of 3 numbers, not lists of 3 numbers, and always stated his sets in the obvious order? :P
Did I just look for confirming evidence? Well, I suppose the answer is neither yes nor no. I didn’t really consider the performance of this group to be evidence at all. I consider the original result to be evidence and I ran it with the group purely as a way of creating engagement with that evidence rather than with the intention that their results would count as further evidence.
In terms of the general point though—that your experience demonstrates that some people who fail the task do test negative sequences—it certainly seems plausible that some of the people who get it wrong would fall into this category.
In Wason’s original version of the task, when someone guessed the wrong answer, they were able to continue to make guesses until they got it right. In this case, 22 out of 29 participants made an incorrect guess before they made a correct guess. It seems that if they were testing negative cases, this shouldn’t have been likely to occur (had they tested a few negative cases, this probably would have revealed that they were wrong).
Right, I was thinking more in the sense of you trying to give them evidence of the bias through the task. Sorry, that did come out wrong.
They guessed the teacher’s password
I didn’t push this line explicitly so they weren’t guessing the teacher’s password in that sense, nor did I ask whether they thought biases affected them.
Instead, of their own volition, at the end they volunteered comments like, “I find that I’m most driven to learn when I realise I’ve been doing something wrong all along”. People didn’t make these sorts of comments in response to other presentations given in the same environment.
In fact, they were so fascinated by the results that people continued to discuss them as they left the class and that definitely wasn’t the case with other presentations.
I’m not denying you could be right but I’d love to know why you think that.
Ah, that sounds much better.
Mostly because of
It felt likely that due to your concern about “other people, not me” it would come across as the “right answer” in your presentation. But it’s quite possible that’s not the case, from your extended description.
On these lines, what was your relationship to this group? You mention that you’re a student, but if you’re a TA, then their behavior is likely to be different than if you’re another member of a club.
Fellow student. Though I’m not sure what conclusion to draw from that.
Why do you think so?
The Sun does orbit the Earth, but that fact is not particularly relevant to why it appears to rise every day.
If you want to bring up the debate regarding heliocentric versus geocentric views of the universe, remember the debate was really about what was the center of the universe; that is, whether everything else went around the Sun or the Earth. (the answer, of course, is ‘both’ for any reasonable definition of ‘around’). The question was settled by which interpretation made the math easier / more beautiful.
Also remember that Galileo exaggerated the debate significantly.
ETA: And, of course, we now don’t consider the Earth or the Sun the center of the universe.
Excuse my language, but there is no way in hell the author of that post has read Kuhn on the very subject he claims to be refuting Kuhn.
I’ve read Kuhn and I didn’t see a problem with it. Care to elaborate?
Like, just to begin with, nearly all the facts he brings up are things Kuhn says. At no point does Kuhn give the reader the impression that there were only ever two competing astronomical models. What he says is that the Ptolemaic-Aristotelian model broke down starting with the Copernican suggestion of a heliocentric universe. The range of theories the blogger lists fit perfectly in the Kuhnian model as stepping stones on the way to Keplerian-Newtonian astronomy. The blogger is right, for example, that the Tychonic model was a powerful predictor, very popular and geocentric. But Brahe’s model is, according to Kuhn, nonetheless part of the transition away from Ptolemaic astronomy because it places the Sun at the center of the orbits of Mercury, Venus, Jupiter, Mars and Saturn and therefore demolishes the notion of crystalline spheres moving the planets (both because such spheres would crash into each other and because he observed comets passing through them). Mathematically successful Brahe may have been, but his model leaves out all of the Aristotelian artifacts that made the Ptolemaic model so compelling. This is exactly what we expect in the midst of a paradigm shift: attempts to incorporate the new evidence into the old paradigm that simultaneously weaken the coherence of the old paradigm. The other geocentric + Earth-spinning models the blogger mentions fit fine as well.
Like… what?! Kuhn spends half a chapter on Kepler. Quoting Kuhn now:
The blog author’s insistence on treating the seven theories he lists as equal and unrelated contestants just isn’t responsive to what Kuhn is talking about. That there were, for 200 years, more than two competing models is not at all reason to think there were more than two competing paradigms. There is no good reason not to call Kepler a Copernican even though he no longer agreed with the techniques of Copernicus (100 years and Brahe’s measurements intervening). Kepler called himself a Copernican. And similarly there is no good reason not to see Brahe as trying to defend aspects of the old paradigm. His model is mathematically equivalent to the Copernican model; it consists of retconning the Ptolemaic model to make it fit observation for religious and philosophical reasons. The blog doesn’t pay any attention to the extra-astronomical aspects of the revolution, and these are the features that make up the paradigm. Any theory that ruins the Aristotelian cosmology is another chink in the old paradigm’s armor.
I could go on, but in short, every other sentence in that post reads to me like a non sequitur. Cartesians persisted in arguing for their model some fifty years after Newton died? So what? That’s neither here nor there.
Thanks! That was very thorough.
Thanks. I think this should be fixed now.
The author of that post needs to learn how to use punctuation.
Nitpick: When you mentioned the 2-4-6 experiment, I didn’t know what it was, so I clicked on the link and read about it. That was unnecessary, because you immediately explained it … but, somehow, the wording of the narrative didn’t signal that the explanation was coming.
Could be just me.
Fixed (I hope). Thanks.
Surely you can find out the details before writing a top-level post.
I did try. I didn’t just include such non-specific details out of laziness. I made a concerted effort to track down the example and failed. If anyone else has heard of this case, I’d love to know the details so I could put them in.
I am surprised. I ran the test once in a circle of my friends in a pub and five out of six got the right answer. (I am pretty sure nobody except me knew that it is a standard psychological experiment.)
Apologies for the nitpick, but in section 1, you repeatedly use ‘effect’ where you should use ‘affect’. It’s very distracting.
Fixed, thanks.
Thanks. You’re still missing one in the bold heading for section 1.
I’m not convinced biases are real in either me or anybody else in realistic situations, as opposed to highly artificial laboratory settings.
Here are two very extreme “biases” of perception—they’re shocking the first time you see them, and even if you know about them you can do absolutely nothing to reduce their power:
auditory illusion
optical illusion
In a laboratory setting we can use such “biases” to manipulate people and make them fail tests. But these same “biases” actually help more often than they hinder in real life.
Is there any serious evidence that “biases” are significantly harmful on average in nonartificial settings? This is the big unspoken assumption, but evidence is lacking.
Costs of irrationality.
What the? Are you serious? People gamble on lotteries and smoke. Young males pay more for insurance than other groups.
Our entire civilization is ‘artificial’ from the perspective of our genetic heritage. Of course some of our biases are going to be a hindrance in everyday life.
None of that sounds to me like what was requested in the grandparent.
Sure, theoretically, biases are worse than perfect rationality. No problem there.
But in practice, is having a bunch of biases directing many of our actions significantly harmful on average, as compared to some other method of bounded rationality? I don’t think I’ve seen a study on this.
You either didn’t read the (relative to this) grandparent or you have an unhealthily bizarre definition of significantly harmful.
No, I read the grandparent, and I doubt I have such a definition.
Yes, people smoke cigarettes, and let’s assume for the sake of argument that it counts as “significantly harmful”. Now imagine (hypothetically) that the same mechanism of thought also causes them to pursue lifestyles that grant them an overall 20% increase in healthy lifespan as compared to nonsmokers. In that scenario, the bias that causes smoking cigarettes is not “significantly harmful on average”.
Now consider another hypothetical where people smoke cigarettes due to their biases, and other people without those biases have a significantly higher incidence of being run over by buses. Then, the biases that cause smoking cigarettes are not “significantly harmful on average” as compared to the alternative.
I see you assuming exactly what taw claims we’re assuming. I don’t see you citing any empirical studies showing that it is the case.
If anything like this were the case, I expect insurance companies would have picked up on it by now. It’s possible to imagine a situation where the same bias balances out to not being harmful, but we have enough evidence to strongly suspect that doesn’t describe the world we live in.
I cited the known and well-justified behaviour of insurance companies. Compared to that, most ‘empirical studies’ are barely more than slightly-larger-than-average anecdotes.
Yes, I could assume that any obvious failure mode of our biases not serving us well in the present day environment is actually balanced out by some deep underlying benefit of that bias that still applies now and that I haven’t thought of yet. But that would be an error somewhere between privileging the hypothesis and outright faith in the anthropomorphised benevolence of our genetic heritage.
Edit: DVNM (Down Vote (of parent as of present time) Not Me!)
If that were true, then the cognitive defect would be the inability to distinguish between the problems of choosing whether to smoke cigarettes and how to avoid being run over by buses. Both the “biased” and “unbiased” people are somehow throwing away enough information about at least one of these problems to make them seem isomorphic in such a way that the effective strategy in one corresponds to the ineffective strategy in the other. The underlying problem behind the bias is throwing away the information.
I can imagine a universe where Omega goes around and gives a hundred dollars to each person who is susceptible to some bias. This doesn’t mean that this example has any connection to the real world in arguing that the bias is somehow a good thing.
In addition to what others said (hypothetical examples are irrelevant for actual reality), it is not clear what you are comparing the biases to. What does “the same thought mechanism” mean in this case? Thought mechanisms don’t have clear boundaries. If some pattern of thought is beneficial in some situations and harmful in others, we are free to call “bias” only its harmful applications.
The USSR relations + Poland invasion study shows that cognitive biases can substantially impact even experts in a field when making decisions. There are studies which show that doctors commit the base rate fallacy in actual practice.
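For concreteness, here is the form of base rate problem typically given to doctors, worked through in a short sketch. The numbers are the standard illustrative ones from the mammography version of the problem, not drawn from any particular study.

```python
# The base rate fallacy, worked through with Bayes' theorem.
# Illustrative numbers (roughly the classic mammography version):
base_rate   = 0.01    # P(disease)
sensitivity = 0.80    # P(positive test | disease)
false_pos   = 0.096   # P(positive test | no disease)

p_positive = sensitivity * base_rate + false_pos * (1 - base_rate)
posterior  = sensitivity * base_rate / p_positive
print(round(posterior, 3))  # 0.078: under 8%, not the ~80% commonly guessed
```

In studies of this kind, many doctors estimate the probability at well over 50%, anchoring on the test’s sensitivity and neglecting the low base rate.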
Can you provide some specific real examples of this?
Many heuristics are sloppy, which only matters in edge cases, but they are much cheaper in standard use cases. Using them instead of thinking things through fully saves time and energy, though it’s sometimes wrong.
I don’t know if that’s usefully described as a “bias”, though, provided that the person understands the tradeoffs of what they’re doing. Properly used, heuristics (and other forms of estimation) lose precision, but not accuracy.