The only general bias I’ve heard of that’s close to this is the certainty effect. If there’s another one I haven’t heard of, I would greatly appreciate hearing about it.
I don’t think it’s all the certainty effect. The bias that people seem to have can usually be modeled by a nonlinear utility function, but isn’t it still there in cases where it’s understood that utility is linear (lives saved, charity dollars, etc)?
but isn’t it still there in cases where it’s understood that utility is linear (lives saved, charity dollars, etc)?
Why would those be linear? (i.e. who understands that?)
Utility functions are descriptive; they map from expected outcomes to actions. You measure them by determining what actions people take in particular situations.
Consider scope insensitivity. It doesn’t make sense if you measure utility as linear in the number of birds: aren’t 200,000 birds 100 times more valuable than 2,000 birds? It’s certainly 100 times more birds, but that doesn’t tell us anything about value. What it tells you is that the action “donate to save birds in response to prompt” provides $80 worth of utility, and the number of birds doesn’t look like an input to the function.
And while scope insensitivity reflects a pitfall in human cognition, it’s not clear it doesn’t serve goals. If the primary benefit to a college freshman of, say, opposing genocide in Darfur is that it signals their compassion, it doesn’t really matter what the scale of the genocide in Darfur is. Multiply or divide the number of victims by ten, and they’re still going to slap on a “save Darfur” t-shirt, get the positive reaction from that, and then move on with their lives.
Now, you may argue that your utility function should be linear with respect to some feature of reality, but that’s like saying your BMI should be 20. It is whatever it is, and will take effort to change. Whether or not it’s worth the effort is, again, a question of revealed preferences.
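The measurement point above can be sketched as a toy model: the revealed utility function the studies actually find is flat in the number of birds, while the function one might endorse is linear in it. (The $80 figure is the one discussed above; the per-bird dollar value is an arbitrary illustrative constant, not a measured quantity.)

```python
def revealed_utility_of_donating(num_birds):
    # Descriptive model: what the studies measured. Stated willingness
    # to pay is roughly $80 regardless of scale, so the number of birds
    # is not really an input to the function.
    return 80.0

def endorsed_linear_utility(num_birds, value_per_bird=0.04):
    # Normative model one might endorse instead: value scales linearly
    # with the number of birds saved (value_per_bird is illustrative).
    return num_birds * value_per_bird

for n in (2_000, 20_000, 200_000):
    print(n, revealed_utility_of_donating(n), endorsed_linear_utility(n))
```

The revealed function returns the same number across a 100x change in scope; the endorsed one scales by 100x. The gap between the two is exactly the scope insensitivity being described.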
Given that the scope of the problem is so much larger than the influence we usually have when making these calculations, the gradient at the margin is essentially linear.
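A quick numerical sketch of this point: even if total utility is strongly nonlinear in lives saved, the change we can make is tiny relative to the scope of the problem, so the exact marginal gain is almost indistinguishable from the linear approximation. (The log utility function and the specific numbers are illustrative assumptions, not anyone’s actual estimates.)

```python
import math

def utility(lives_saved):
    # Hypothetical strongly concave utility: diminishing returns at scale.
    return math.log(lives_saved)

N = 5_000_000   # assumed total scale of the problem
d = 300         # lives we can actually affect at the margin

exact_gain = utility(N + d) - utility(N)
linear_approx = d / N   # d * u'(N), since u'(x) = 1/x for log utility

# The two agree to within a few parts in a hundred thousand: at the
# margin, even a strongly nonlinear utility function is essentially linear.
print(exact_gain, linear_approx)
```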
(i.e. who understands that?)
Most people who have read Eliezer’s posts. He has made at least one on this subject.
Given that the scope of the problem is so much larger than the influence we usually have when making these calculations, the gradient at the margin is essentially linear.
That’s exactly what I would say, in way fewer words. Well said.
Sorry guys.