If utility isn’t fungible for large quantities, does that mean that it is rational to be scope-insensitive?
Yes, but one has to be very careful. For humans, scope-insensitivity usually occurs at ranges where the goods are still fungible. In the studies that Eliezer presents in that post, the issue is slightly different: there are so many copies of a good X that adding or removing, say, 1,000 of them does not affect the value of a single copy of X.
For instance, there are probably billions of birds in existence; if we are willing to pay $80 to save 2,000 birds when there are 1,000,000,000 of them, then we should also be willing to pay $80 to save 2,000 birds when there are 999,998,000 of them. Chaining this argument ten times means we should be willing to pay $800 to save 20,000 birds, as opposed to the roughly $80 still reported in the survey.
(For this argument to go through fully, we also have to argue that $800 is a small fraction of a person's total wealth, so that the marginal utility of money stays roughly constant over the whole range; this holds for most people in first-world countries.)
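To make the arithmetic explicit, here is a sketch of the chained argument, writing WTP(n | N) for our willingness to pay to save n birds when N birds are alive (notation introduced here for illustration, not from the survey):

$$\text{WTP}(20{,}000 \mid 10^9) \;\approx\; \sum_{k=0}^{9} \text{WTP}\bigl(2{,}000 \,\big|\, 10^9 - 2{,}000k\bigr) \;\approx\; 10 \times \$80 \;=\; \$800.$$

The approximation only requires that each marginal batch of 2,000 birds is worth about the same $80 across this narrow range of populations, together with the assumption above that $800 does not noticeably dent our total wealth.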