The limits of introspection

Related to: Inferring Our Desires

The last post in this series suggested that we make up goals and preferences for other people as we go along, but ended with the suggestion that we do the same for ourselves. This deserves some evidence.
One of the most famous sets of investigations into this issue was Nisbett and Wilson’s Telling More Than We Can Know: Verbal Reports on Mental Processes, the discovery of which I owe to another Less Wronger, though I can’t remember who. The abstract says it all:
When people attempt to report on their cognitive processes, that is, on the processes mediating the effects of a stimulus on a response, they do not do so on the basis of any true introspection. Instead, their reports are based on a priori, implicit causal theories, or judgments about the extent to which a particular stimulus is a plausible cause of a given response. This suggests that though people may not be able to observe directly their cognitive processes, they will sometimes be able to report accurately about them. Accurate reports will occur when influential stimuli are salient and are plausible causes of the responses they produce, and will not occur when stimuli are not salient or are not plausible causes.
In short, people guess, and sometimes they get lucky. But where’s the evidence?
Nisbett & Schachter, 1966. People were asked to get electric shocks to see how much shock they could stand (I myself would have waited to see if one of those see-how-much-free-candy-you’ll-eat studies from the post last week was still open). Half the subjects were also given a placebo pill which they were told would cause heart palpitations, tremors, and breathing irregularities—the main problems people report when they get shocked. The hypothesis: people who took the pill would attribute much of the unpleasantness of the shock to the pill instead, and so tolerate more shock. This occurred right on schedule: people who took the pill tolerated four times as strong a shock as controls. When asked why they did so well, the twelve subjects in the experimental group came up with fabricated reasons; one example given was “I played with radios as a child, so I’m used to electricity.” Only three of twelve subjects made a connection between the pill and their shock tolerance; when the researchers revealed the deception and their hypothesis, most subjects said it was an interesting idea and probably explained the other subjects, but it hadn’t affected them personally.
Zimbardo et al, 1965. Participants in this experiment were probably pleased to learn there were no electric shocks involved, right up until the point where the researchers told them they had to eat bugs. In one condition, a friendly and polite researcher made the request; in another, a surly and arrogant researcher asked. Everyone ate the bug (experimenters can be pretty convincing), but only the group accosted by the unpleasant researcher claimed to have liked it. This confirmed the team’s hypothesis: the nice-researcher group would know why they ate the bug—to please their new best friend—but the mean-researcher group would either have to admit it was because they’re pushovers, or explain it by saying they liked eating bugs. When asked after the experiment why they were so willing to eat the bug, they said things like “Oh, it’s just one bug, it’s no big deal.” When presented with the idea of cognitive dissonance, they once again agreed it was an interesting idea that probably affected some of the other subjects but of course not them.
Maier, 1931. Subjects were placed in a room with several interesting tools and asked to come up with as many solutions as possible to a puzzle about tying two cords together. One end of each cord was tied to the ceiling, and when the subject was holding on to one cord they couldn’t reach the other. A few solutions were obvious, such as tying an extension cord to each, but the experiment involved a more complicated solution—tying a weight to one cord and swinging it as a pendulum to bring it within reach of the other. Subjects were generally unable to come up with this idea on their own in any reasonable amount of time, but when the experimenter, supposedly in the process of observing the subject, “accidentally” brushed up against one cord and set it swinging, most subjects were able to develop the solution within 45 seconds. However, when the experimenter asked immediately afterwards how they came up with the pendulum idea, the subjects were completely unable to recognize the experimenter’s movement as the cue, and instead came up with completely unrelated ideas and invented thought processes, some rather complicated. After what the study calls “persistent probing”, less than a third of the subjects mentioned the role of the experimenter.
Latane & Darley, 1970. This is the famous “bystander effect”, where people are less likely to help when there are others present. The researchers asked subjects in bystander effect studies what factors influenced their decision not to help; the subjects gave many, but didn’t mention the presence of other people.
Nisbett & Wilson, 1977. Subjects were primed with lists of words all relating to an unlisted word (eg “ocean” and “moon” to elicit “tide”), and then asked a question, one possible answer to which involved the unlisted word (eg “What’s your favorite detergent?” “Tide!”). The experimenters confirmed that many more people who had been primed with the lists gave the unlisted answer than control subjects (eg more people who had memorized “ocean” and “moon” gave Tide as their favorite detergent). Then they asked subjects why they had chosen their answer, and the subjects generally gave totally unrelated responses (eg “I love the color of the Tide box” or “My mother uses Tide”). When the experiment was explained to subjects, only a third admitted that the words might have affected their answer; the rest kept insisting that Tide was really their favorite. Then they repeated the process with several other words and questions, continuing to ask if the word lists influenced answer choice. The subjects’ answers were effectively random—sometimes they believed the words didn’t affect them when statistically they probably did, other times they believed the words did affect them when statistically they probably didn’t.
Nisbett & Wilson, 1977. Subjects in a department store were asked to evaluate different articles of clothing in a line. As usually happens in this sort of task, people disproportionately chose the rightmost object (four times as often as the leftmost), no matter which object was on the right; this is technically referred to as a “position effect”. The customers were asked to justify their choices and were happy to do so based on different qualities of the fabric et cetera; none said their choice had anything to do with position, and the experimenters dryly mention that when they asked the subjects if this was a possibility, “virtually all subjects denied it, usually with a worried glance at the interviewer suggesting they felt that they...were dealing with a madman”.
Nisbett & Wilson, 1977. Subjects watched a video of a teacher with a foreign accent. In one group, the video showed the teacher acting kindly toward his students; in the other, it showed the teacher being strict and unfair. Subjects were asked to rate how much they liked the teacher, and also how much they liked his appearance and accent, which were the same across both groups. Because of the halo effect, subjects who saw the teacher acting nice thought he was attractive with a charming accent; subjects who saw him acting mean thought he was ugly with a harsh accent. Then subjects were asked whether their overall liking for the teacher had affected their ratings of his appearance and accent. They generally denied any halo effect, and in fact often insisted that part of the reason they hated the teacher so much was his awful appearance and annoying accent—the same appearance and accent which the nice-teacher group said were part of the reason they liked him so much!
There are about twice as many studies listed in the review article itself, but the trend is probably getting pretty clear. In some studies, like the bug-eating experiment, people perform behaviors and, when asked why they performed the behavior, guess wrong. Their true reasons for the behavior are unclear to them. In others, like the clothes position study, people make a choice, and when asked what preferences caused the choice, guess wrong. Again, their true reasons are unclear to them.
Nisbett and Wilson add that when they ask people to predict how they would react to the situations in their experiments, people “make predictions that in every case were similar to the erroneous reports given by the actual subjects.” In the bystander effect experiment, outsiders predict the presence or absence of others wouldn’t affect their willingness to help, and subjects claim (wrongly) that the presence or absence of others didn’t affect their willingness to help.
In fact, it goes further than this. In the word-priming study (remember? The one with Tide detergent?) Nisbett and Wilson asked outsiders to predict which sets of words would change answers to which questions (would hearing “ocean” and “moon” make you pick Tide as your favorite detergent? Would hearing “Thanksgiving” make you pick Turkey as a vacation destination?). The outsiders’ guesses correlated not at all with which words genuinely changed answers, but very much with which words the subjects guessed had changed their answers. Perhaps the subjects’ answers looked a lot like the outsiders’ answers because both were engaged in the same process: guessing blindly.
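The logic of that last claim can be illustrated with a toy simulation (purely a sketch with made-up numbers, not Nisbett and Wilson’s data). If subjects and outsiders both judge each word list by the same a priori plausibility, their guesses correlate with each other but hardly at all with the actual effects:

```python
import random

random.seed(0)

# Toy model: each item is a word-list/answer pair. The "actual effect" of
# priming is unrelated to how plausible a cause the words seem. Subjects
# and outsiders both guess from plausibility, so their guesses agree with
# each other but not with reality.
n_items = 1000
plausibility = [random.random() for _ in range(n_items)]   # shared a priori theory
actual_effect = [random.random() for _ in range(n_items)]  # true (hidden) influence

# Both groups guess from plausibility, with independent noise.
subject_guess = [p + random.gauss(0, 0.2) for p in plausibility]
outsider_guess = [p + random.gauss(0, 0.2) for p in plausibility]

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

print(pearson(outsider_guess, subject_guess))   # strong: both track the shared theory
print(pearson(outsider_guess, actual_effect))   # near zero: neither tracks reality
```

The point of the sketch is only that agreement between two sets of reports is no evidence that either set tracks the truth, when both are generated from the same shared theory.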
These studies suggest that people do not have introspective awareness of the processes that generate their behavior. They guess their preferences, justifications, and beliefs by inferring the most plausible rationale for their observed behavior, but are unable to make these guesses qualitatively better than outside observers. This supports the view presented in the last few posts: that our stated preferences are the results of opaque mental processes, and that our own “introspected” goals and preferences are a product of the same machinery that infers goals and preferences in others in order to predict their behavior.