No I’m not. I’m talking about the same thing Nyan is talking about.
nyan_sandwich mislabeled their discussion, which appears to be the source of much of the controversy. If you want to talk about minimax, talk about minimax, don’t use another term that has an established meaning.
That is, risk aversion when it comes to actual utility—which is itself a general bias of humans.
The only general bias I’ve heard of that’s close to this is the certainty effect. If there’s another one I haven’t heard of, I would greatly appreciate hearing about it.
The only general bias I’ve heard of that’s close to this is the certainty effect. If there’s another one I haven’t heard of, I would greatly appreciate hearing about it.
I don’t think it’s all the certainty effect. The bias that people seem to have can usually be modeled by a nonlinear utility function, but isn’t it still there in cases where it’s understood that utility is linear (lives saved, charity dollars, etc)?
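For concreteness, here is a minimal sketch of the distinction being drawn (the numbers are invented for illustration): a concave utility function makes "take the sure thing" rational over money, but once utility is stipulated to be linear in the quantity, the same preference has to be counted as a bias rather than diminishing returns.

```python
import math

def expected_utility(lottery, u):
    """Expected utility of a lottery given as [(probability, outcome), ...]."""
    return sum(p * u(x) for p, x in lottery)

# A 50/50 gamble between $0 and $100 vs. a sure $50.
gamble = [(0.5, 0.0), (0.5, 100.0)]
sure = [(1.0, 50.0)]

concave = lambda x: math.sqrt(x)   # diminishing marginal utility
linear = lambda x: x               # utility linear in the outcome

# With concave utility the sure thing wins (risk aversion over money)...
assert expected_utility(sure, concave) > expected_utility(gamble, concave)
# ...but with linear utility the two options are exactly equivalent,
# so preferring the sure thing here is a bias, not diminishing returns.
assert expected_utility(sure, linear) == expected_utility(gamble, linear)
```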
but isn’t it still there in cases where it’s understood that utility is linear (lives saved, charity dollars, etc)?
Why would those be linear? (i.e. who understands that?)
Utility functions are descriptive; they map from expected outcomes to actions. You measure them by determining what actions people take in particular situations.
Consider scope insensitivity. It doesn’t make sense if you measure utility as linear in the number of birds: aren’t 200,000 birds 100 times more valuable than 2,000 birds? It’s certainly 100 times more birds, but that doesn’t tell us anything about value. What it tells you is that the action “donate to save birds in response to prompt” provides $80 worth of utility, and the number of birds doesn’t look like an input to the function.
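To make that fitting argument concrete, here is a toy sketch with figures loosely patterned on the classic scope-insensitivity study (treat them as illustrative, not as data):

```python
# Illustrative figures: birds saved -> dollars offered in response to prompt.
responses = {2_000: 80, 20_000: 78, 200_000: 88}

# If utility were linear in birds, dollars-per-bird would be constant...
per_bird = [dollars / birds for birds, dollars in responses.items()]
# ...but it varies by roughly two orders of magnitude,
assert max(per_bird) / min(per_bird) > 50
# while dollars-per-prompt stays within about 10% of $80:
assert all(abs(dollars - 80) <= 10 for dollars in responses.values())
# the number of birds doesn't look like an input to the function.
```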
And while scope insensitivity reflects a pitfall in human cognition, it’s not clear it doesn’t serve goals. If the primary benefit to a college freshman of, say, opposing genocide in Darfur is the compassion it signals, it doesn’t really matter what the scale of the genocide in Darfur is. Multiply or divide the number of victims by ten, and they’re still going to slap on a “save Darfur” t-shirt, get the positive reaction from that, and then move on with their lives.
Now, you may argue that your utility function should be linear with respect to some feature of reality, but that’s like saying your BMI should be 20. It is whatever it is, and will take effort to change. Whether or not it’s worth the effort is, again, a question of revealed preferences.
Given that the scope of the problem is so much larger than the influence we usually have when making these calculations, the gradient at the margin is essentially linear.
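A quick numeric sketch of why (assuming, purely for illustration, utility that is logarithmic in the total): when the stakes you control are tiny relative to the scale of the problem, the local slope of even a strongly concave function is effectively constant.

```python
import math

# Suppose (for illustration) utility is concave in total lives saved.
u = math.log

baseline = 1_000_000   # scale of the problem
delta = 10             # scale of our influence

# Utility gained per life at the margin, at two adjacent points:
slope_here = (u(baseline + delta) - u(baseline)) / delta
slope_there = (u(baseline + 2 * delta) - u(baseline + delta)) / delta

# The slopes agree to about 1 part in 10^5: at the margin the concave
# function is effectively linear, so marginal contributions just add up.
assert abs(slope_here - slope_there) / slope_here < 1e-4
```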
(i.e. who understands that?)
Most people who have read Eliezer’s posts. He has made at least one post on this subject.
Given that the scope of the problem is so much larger than the influence we usually have when making these calculations, the gradient at the margin is essentially linear.
That’s exactly what I would say, in way fewer words. Well said.
nyan_sandwich mislabeled their discussion, which appears to be the source of much of the controversy. If you want to talk about minimax, talk about minimax, don’t use another term that has an established meaning.
In the specific case of risk aversion he is using the term correctly, and your substitution of the meaning behind “diminishing marginal utility” is not a helpful correction; it is an error. Minimax is again related, but also not the correct word. (I speak up because in Nyan’s situation I would be frustrated by being falsely corrected.)
In the specific case of risk aversion he is using the term correctly
If you could provide examples of this sort of usage in the utility theory literature or textbooks, I will gladly retract my corrections. I don’t recall seeing “risk aversion” used this way before.
Minimax is again related but also not the correct word.
nyan_sandwich has edited their post to reflect that minimax was their intention.
If you could provide examples of this sort of usage in the utility theory literature or textbooks, I will gladly retract my corrections. I don’t recall seeing “risk aversion” used this way before.
It is just the standard usage, applied appropriately to utility. Even the ‘certainty effect’ that you mention is an example of being risk averse with respect to utility, albeit one limited to a specific subset of cases: again, when the object at risk is evaluated in terms of utility.
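A concrete version of the certainty effect (the payoffs below are the standard Allais-paradox numbers, used here purely as an illustration): the common pair of preferences cannot be reproduced by any utility function over outcomes, which is what makes it risk aversion with respect to utility itself.

```python
import math

def eu(lottery, u):
    """Expected utility of a lottery given as [(probability, payoff), ...]."""
    return sum(p * u(x) for p, x in lottery)

# Allais-style gambles (payoffs in dollars).
A = [(1.00, 1_000_000)]
B = [(0.10, 5_000_000), (0.89, 1_000_000), (0.01, 0)]
C = [(0.11, 1_000_000), (0.89, 0)]
D = [(0.10, 5_000_000), (0.90, 0)]

# C and D are just A and B with a common 0.89 chance of $1M replaced by
# $0, so *any* expected-utility maximizer that prefers A to B must also
# prefer C to D. Checking the equivalence with a few sample utilities:
for u in (lambda x: x, math.sqrt, lambda x: math.log(x + 1)):
    assert (eu(A, u) > eu(B, u)) == (eu(C, u) > eu(D, u))
# Yet many people report preferring A to B while also preferring D to C:
# the certainty effect, a pattern no utility function over outcomes fits.
```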
nyan_sandwich has edited their post to reflect that minimax was their intention.
Which may apply somewhere in the post, but in the specific context it just wouldn’t have made sense in the sentence.
nyan_sandwich mislabeled their discussion, which appears to be the source of much of the controversy. If you want to talk about minimax, talk about minimax, don’t use another term that has an established meaning.
The only general bias I’ve heard of that’s close to this is the certainty effect. If there’s another one I haven’t heard of, I would greatly appreciate hearing about it.
Sorry guys.