BLUF: The cited paper doesn’t support the claim that we change our minds less often than we think; if anything, it and a paper it cites point the other way. A better claim is that we change our minds less often than we should.
The cited paper is freely downloadable: The weighing of evidence and the determinants of confidence. Here is the sentence immediately following the quote:

It is noteworthy that there are situations in which people exhibit overconfidence even in predicting their own behavior (Vallone, Griffin, Lin, & Ross, 1990). The key variable, therefore, is not the target of prediction (self versus other) but rather the relation between the strength and the weight of the available evidence.

The citation is to Vallone, R. P., Griffin, D. W., Lin, S., & Ross, L. (1990). Overconfident Prediction of Future Actions and Outcomes by Self and Others. Journal of Personality and Social Psychology, 58, 582-592.

Self-predictions are predictions
Occam’s Razor says that our mainline prior should be that self-predictions behave like any other predictions. These are old papers and include a small number of small studies, so they probably don’t shift beliefs all that much. However much you weigh them, I think they weigh in favor of Occam’s Razor.
In Vallone 1990, 92 students were asked to predict their future actions later in the academic year, and those of their roommate. An example prediction: will you go to the beach? The long gap between prediction and outcome makes these more challenging self-predictions. Students were 78.7% confident and 69.1% accurate for self-prediction, compared with 77.4% confident and 66.3% accurate for other-prediction. Perhaps evidence for “we change our minds more often than we think”.
More striking, I think, is that self-predictions and other-predictions showed similar overconfidence: roughly a 10-point gap between confidence and accuracy in both cases. They also showed similar patterns of overconfidence: it was clearest when the prediction went against the base rate, and students underweighted the base rate when predicting themselves and when predicting others.
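For concreteness, here is the gap implied by those numbers (a trivial sketch; the only inputs are the figures quoted above):

```python
# Confidence minus accuracy, from the Vallone 1990 figures quoted above.
vallone_1990 = {
    "self-prediction":  {"confidence": 78.7, "accuracy": 69.1},
    "other-prediction": {"confidence": 77.4, "accuracy": 66.3},
}
for target, stats in vallone_1990.items():
    gap = stats["confidence"] - stats["accuracy"]
    print(f"{target}: {gap:.1f} points of overconfidence")
# self-prediction: 9.6 points; other-prediction: 11.1 points. Similar, and roughly 10 points each.
```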
Beyond Occam’s Razor, self-predictions are inescapably also predictions about other future events. Consider the job offer case study. Will one of the employers increase the compensation during negotiation? What will the candidate find out when they research the job locations? What advice will they receive from their friends and family? Conversely, many other-predictions are entangled with self-predictions. It’s hard to see how we could be underconfident in self-prediction, overconfident in other-prediction, and never notice when the two biases clash.
Short-term self-predictions are easier
In Griffin 1992, the first test of “self vs other” calibration is study 4. This is a set of cooperate/defect tasks where the 24 players predict their own future actions and their partner’s. They were 84% confident and 81% accurate in self-prediction but 83% confident and 68% accurate in other-prediction. So they were well-calibrated for self-prediction and overconfident for other-prediction. Perhaps evidence for “we change our minds as often as we think”.
But self-prediction in this game is much, much easier than other-prediction. 81% accuracy is surprisingly low; my guess is that players were choosing a non-deterministic strategy (e.g., defect 20% of the time) or were deciding partly based on seeing their partner. Still, I have a much better idea of whether I am going to cooperate or defect in a game like that, because I know myself a little, and I know other people less.
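As a back-of-envelope check on that guess (the randomization rate is my assumption, not something the paper reports): if a player plans to randomize, the best possible self-prediction is just the more likely of the two actions, so 81% accuracy is consistent with about 19% randomization.

```python
# If a player defects with probability p, the best self-prediction is the more likely action.
def best_self_prediction_accuracy(p_defect):
    return max(p_defect, 1 - p_defect)

print(best_self_prediction_accuracy(0.19))  # 0.81: ~19% randomization caps self-prediction accuracy at 81%
```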
The next study in Griffin 1992 is a deliberate test of the impact of difficulty on calibration, where they find:
A comparison of Figs. 6 and 7 reveals that our simple chance model reproduces the pattern of results observed by Lichtenstein & Fischhoff (1977): slight underconfidence for very easy items, consistent overconfidence for difficult items, and dramatic overconfidence for “impossible” items.
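To see how difficulty alone can drive that pattern, here is a minimal Monte Carlo sketch in the same spirit as that chance model (my simplification, not the paper’s exact setup): a judge sees five noisy cues, answers with the majority, and reports confidence equal to the share of agreeing cues (the strength of the evidence), ignoring how few and how unreliable the cues are (its weight).

```python
import random

def trial(cue_accuracy, n_cues=5):
    # Each cue independently points to the correct answer with probability cue_accuracy.
    cues = [random.random() < cue_accuracy for _ in range(n_cues)]
    votes_for_truth = sum(cues)
    answer_is_correct = votes_for_truth > n_cues / 2
    # Confidence tracks the strength of the sample (share of agreeing cues),
    # with no discount for its weight (only five noisy cues).
    confidence = max(votes_for_truth, n_cues - votes_for_truth) / n_cues
    return confidence, answer_is_correct

for label, cue_accuracy in [("easy", 0.95), ("difficult", 0.55), ("impossible", 0.50)]:
    results = [trial(cue_accuracy) for _ in range(100_000)]
    mean_confidence = sum(confidence for confidence, _ in results) / len(results)
    accuracy = sum(correct for _, correct in results) / len(results)
    print(f"{label:10} confidence {mean_confidence:.0%}  accuracy {accuracy:.0%}")
```

Under these assumptions the easy items come out slightly underconfident (about 95% confidence against nearly 100% accuracy), the difficult items consistently overconfident (about 69% against 59%), and the impossible items dramatically overconfident (about 69% against 50%), which is the Lichtenstein & Fischhoff pattern.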
Self-predictions are not self-recall
If someone says “we change our minds less often than we think”, they could mean one or more of:
We change our minds less often than we predict that we will
We change our minds less often than we model that we do
We change our minds less often than we recall that we did
If an agent has a bad self-model, it will make bad self-predictions (unless its mistakes cancel out). If an agent has bad self-recall, it will build a bad self-model (unless it builds its self-model iteratively). But if an agent makes bad self-predictions, we can’t say anything about its self-model or self-recall, because all the bugs can be in its prediction engine.
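A toy illustration of that last point (entirely my own construction, not from the papers): two agents announce identical, identically wrong self-predictions, one because its prediction engine is buggy and one because its self-model is.

```python
ACTUAL_CHANGE_RATE = 0.5  # hypothetical: this agent really does change its mind half the time

def buggy_engine(modelled_rate):
    return modelled_rate * 0.2  # bug: systematically discounts whatever the self-model says

def agent_a_prediction():
    accurate_self_model = ACTUAL_CHANGE_RATE  # the self-model is fine...
    return buggy_engine(accurate_self_model)  # ...but the prediction engine mangles it

def agent_b_prediction():
    bad_self_model = 0.1   # the self-model itself is wrong...
    return bad_self_model  # ...and the engine passes it through faithfully

# Both agents predict a 10% chance of changing their mind, against 50% in reality.
# From the predictions alone, an observer cannot tell which component is broken.
print(agent_a_prediction(), agent_b_prediction())  # 0.1 0.1
```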
Instead, Trapped Priors
This post precedes the excellent advice to Hold Off on Proposing Solutions. But the correct basis for that advice is not that “we change our minds less often than we think”. Rather, the problem we need to solve is that we change our minds less often than we should.
In Trapped Priors as a basic problem of rationality, Scott Alexander explains one model for how we can become stuck with inaccurate beliefs and find it difficult to change our beliefs. In these examples, the person with the trapped prior also believes that they are unlikely to change their beliefs.
The person who has a phobia of dogs believes that they will continue to be scared of dogs.
The Republican who thinks Democrats can’t be trusted believes that they will continue to distrust Democrats.
The opponent of capital punishment believes that they will continue to oppose capital punishment.
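Here is a cartoon of that stuck dynamic (my own caricature, not Scott Alexander’s formalism): what the person experiences is a blend of the raw evidence and the prior, and the more extreme the prior, the less the raw evidence gets through, so even reassuring encounters are experienced as confirmation.

```python
def perceived(raw_evidence, prior):
    # The more extreme the prior, the less weight the raw evidence receives.
    evidence_weight = (1 - prior) ** 2
    return evidence_weight * raw_evidence + (1 - evidence_weight) * prior

def after_encounters(prior, encounters=20, raw_evidence=0.05, learning_rate=0.5):
    # Update the belief toward the perceived evidence, not the raw evidence.
    for _ in range(encounters):
        prior = (1 - learning_rate) * prior + learning_rate * perceived(raw_evidence, prior)
    return prior

print(round(after_encounters(prior=0.95), 2))  # ~0.91: the extreme prior barely budges
print(round(after_encounters(prior=0.60), 2))  # ~0.05: a moderate prior washes out on the same evidence
```

The squared weight is an arbitrary choice to make attention to raw evidence collapse as the prior becomes extreme; any similar collapse gives the same qualitative picture of a prior that no longer responds to good news.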
Reflections
I took this post on faith when I first read it, and found it useful. Then I realized that, just from the quote, the cited study doesn’t support the post: people considering two job offers are not “within half a second of hearing the question”. It was that confusion that pushed me to download the paper. I was surprised to find the Vallone citation, which led me to the opposite conclusion. I’m not quite sure what happened in October 2007 (and “on August 1st, 2003, at around 3 o’clock in the afternoon”). Still, the sequence continues to stand with one word changed from “think” to “should”.