(This point is not central to the post, but I am making this comment here because it’s the most recent mention of this concept that I’m aware of.)
In a classic study on scope neglect, people were willing to spend about as much to save 2,000 birds as to save 200,000 birds.
It’s worth noting the way in which this characterization of the study in question[1] is incomplete, and subtly misleading as a result.
Three different sets of people were each asked how much they would pay to save, respectively, 2,000, 20,000, or 200,000 birds. None of the participants were asked more than one of those three questions!
Now, you might reasonably object: the dollar values given in answer to each question, on average, were almost the same. There is no reason to believe that one of the respondents to the 2,000-bird question would’ve given a different response (in expectation) to the 200,000-bird question, had that been the single question they’d been asked (instead of the one they were asked); most likely they’d have answered in the same way that the actual 200,000-bird-condition respondents in fact answered.
Fair enough. Consider, however: this experimental result (such as it is) is usually cited as evidence of a cognitive bias, termed “scope insensitivity”. Setting aside the question of whether that cognitive-causal explanation for the survey responses is correct, note instead: for this to qualify as evidence of a bias, we must claim that the respondents’ answers were somehow wrong. Now, there are two problems with this:
First, suppose we claim that some or all of the respondents answered “incorrectly” or “irrationally”. Very well, but the question then becomes: what is the “right” answer? How would a respondent to the survey have needed to answer, in order to evade the charge of scope insensitivity? What dollar value, given as an answer to the question, would serve as evidence that a respondent was thinking rationally, and not falling prey to the bias? Remember—each respondent was only asked one question!
It seems rather silly to say that there is some right answer to (any version of) the survey question. How much you’re willing to pay is a matter of values and preferences, surely. But if there’s no right answer, then it seems questionable whether we can claim that the respondents answered wrongly; and if so, then how can we accuse them of bias?
Well, suppose we now enter the realm of somewhat reasonable speculation, and conjecture, on the evidence from the cross-subject data, that a given respondent, if asked all three questions, would give a set of three values that were, if not quite as close to each other as the averages of the answers given by the three single-question respondent groups, nevertheless much closer to each other than anything like “a tenfold difference from each condition to the next”. Again, this conjecture is directly supported by nothing in this study; but perhaps it is a reasonable one.
In this case, we can say “If a person would pay $80 to save 2,000 birds, but would not pay $800 to save 20,000 birds and $8,000 to save 200,000 birds, then they are making a mistake; there is no particular amount which is the ‘right’ answer to any one of these three questions, but together the answers to all three may be jointly rational or jointly irrational”.
Now, right away we might note that there is no particular reason to expect that our decision-theoretic utility of these three outcomes should have a linear relationship, much less to expect that the value to us (however construed) of saving 20,000 birds should in fact be 10 times greater than the value to us of saving 2,000 birds. Might it not be true that the study respondents demonstrated a quite rational scope sensitivity to utility, which happened not to scale in the naively obvious way with number of birds saved? It seems to me that this could easily be true… but we need not involve such fundamental problems, because there’s a much more immediate problem with the reasoning given in the previous paragraph:
If rationality requires us to value saving 20,000 birds at 10 times what we value the saving of 2,000 birds, then the natural question is—must this scale indefinitely? Suppose we’re willing to pay $80 to save 2,000 birds; if we must, on pain of irrationality, be willing to pay $800 to save 20,000 birds, and $8,000 to save 200,000 birds, must we therefore also be willing to pay $80,000 to save 2,000,000 birds, $800,000 to save 20,000,000 birds, and $8,000,000 to save 200,000,000 birds?
This is obviously absurd, but then how to construe the “correct” reasoning here? Do we ask “how much would we pay to save all the birds” (of which there are, it seems, between 50 and 430 billion), and then work backwards from there (concluding, inevitably, that we would quite literally refuse to pay a dime for saving 200,000 birds)? Or what?
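To make the arithmetic behind this reductio concrete, here is a purely illustrative sketch (none of these functions come from the study; the $0.04-per-bird rate and the logarithmic alternative are my own hypothetical examples). It contrasts the strictly linear valuation that the “jointly rational” standard seems to demand with a sublinear valuation that is still scope-sensitive but does not explode as the number of birds grows:

```python
import math

def linear_wtp(birds, rate_per_bird=80 / 2_000):
    """Willingness to pay that scales linearly with birds saved
    ($0.04 per bird, calibrated to $80 for 2,000 birds)."""
    return birds * rate_per_bird

def log_wtp(birds, scale=80 / math.log(2_000)):
    """A hypothetical sublinear alternative: WTP grows with log(birds),
    calibrated so that 2,000 birds is also worth $80."""
    return scale * math.log(birds)

# Compare the two valuations across the study's conditions and beyond.
for n in (2_000, 20_000, 200_000, 200_000_000):
    print(f"{n:>11,} birds: linear ${linear_wtp(n):>12,.2f}   log ${log_wtp(n):>8,.2f}")
```

The linear rule reproduces the $80 / $800 / $8,000 ladder and duly demands $8,000,000 for 200,000,000 birds, while the logarithmic rule yields nearly flat answers across the three conditions—much like the pattern the study actually observed—without ever producing absurd sums. This doesn’t settle which curve is “correct”; it only shows that near-flat answers are consistent with a coherent, monotonically scope-sensitive valuation.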
By the way, it is interesting to note that this study—which, let us recall, asked participants “how much they would pay to save … migrating birds from drowning in uncovered oil ponds”—was commissioned and funded by Exxon Corporation, in the wake of the 1989 Exxon Valdez oil spill. William H. Desvousges, the lead author, writes: “The key issue in designing the studies is that they were being prepared in anticipation of litigation. While Exxon did not exert influence on the choice of study topics nor how the studies were conducted, we were acutely aware that the study findings might be used in court at some future date. It would be necessary to explain the issues tested in simple terms to a judge who had no knowledge of the nonmarket-valuation literature or to a jury who would have very limited knowledge of economics. … Our proposal, which later formed the basis for the ‘birds study,’ was stimulated by an article that Rick [i.e., coauthor Richard W. Dunford] had seen in the News and Observer that indicated more birds had died by landing in waste oil holding ponds in the Central Flyway than had been killed in the Exxon Valdez oil spill.”