What does “the utility function isn’t up for grabs” mean? I think Eliezer originated that phrase, but he apparently also believes that we can be and should be persuaded by (some) moral arguments. Aren’t these two positions contradictory?
(The argument that unbounded utility functions lead to absurd decisions seems valid, or at least coherent, and potentially persuasive.)
A notion can be constant and yet we can learn about it.
For example, “the set of all prime numbers” is clearly unchanged by our reasoning, and yet we learn things about it (whether it is finite, for example). Kripke used (for a different purpose) the example of the morning star and the evening star: the two concepts are discovered, from scientific evidence, to be one and the same.
The argument that unbounded utility functions lead to absurdity is also persuasive.
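For concreteness, the finiteness question in the prime-number example is settled by Euclid’s argument, without the set being changed by the reasoning in any way: for any finite list of primes p_1, ..., p_n, consider

```latex
\[
  N \;=\; p_1 p_2 \cdots p_n + 1 .
\]
```

None of the p_i divides N (each leaves remainder 1), so N has a prime factor missing from the list; hence the set of primes is infinite. The reasoning discovers a property of the fixed set rather than altering it.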
That seems to be a reasonable interpretation, but if we do interpret “the utility function isn’t up for grabs” that way, as a factual claim that each person has a utility function that can be discovered but not changed by moral arguments and reasoning, then I think it’s far from clear that the claim is true.
There could be other interpretations that may or may not be more plausible, and I’m curious what Eliezer’s own intended meaning is, as well as what pengvado meant by it.
There is a sense in which anything that makes choices does have a utility function: the utility function revealed by its choices. In this sense, for example, akrasia doesn’t exist: people prefer to procrastinate, as revealed by their choice to procrastinate.
People frequently slip back and forth between this sense of “utility function” (a rather strange redescription of their behavior, whatever that behavior happens to be) and the utilitarian philosophers’ notion of “utility”, which has something to do with happiness, pleasure, or fun. To the extent that people pursue happiness, pleasure, and fun, the two senses overlap. In my experience, however, people often make themselves miserable, or make choices according to lawful rules (of morality, say) without any internal experience of pleasure in following those rules.
And it’s worse than just akrasia. If you have incoherent preferences and someone money-pumps you, then the revealed utility function is “likes running around in circles”, i.e. it isn’t even about the choices you thought you were deciding between.
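A minimal sketch of that money pump, assuming toy goods A, B, C, cyclic preferences A > B > C > A, and a hypothetical one-cent fee per trade:

```python
# A money pump against cyclic preferences A > B > C > A: the agent pays a
# small fee for every "upgrade" and ends the cycle holding what it started
# with, strictly poorer -- the revealed preference is for running in circles.
prefers = {("A", "B"), ("B", "C"), ("C", "A")}  # (preferred, dispreferred) pairs

def accepts_trade(current, offered):
    """Trade whenever the offered good is preferred to the current one."""
    return (offered, current) in prefers

holding, cents = "C", 0
for offered in ["B", "A", "C"]:              # one full cycle of offers
    if accepts_trade(holding, offered):
        holding, cents = offered, cents - 1  # pays one cent per trade

print(holding, cents)   # -> C -3: same good as before, three cents poorer
```

No assignment of utilities to A, B, and C alone reproduces this behavior; the choices only “reveal” a taste for the trades themselves.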
Yup.
Speaking as if “everyone” has a utility function is common around here, but it makes my teeth hurt.
I agree that if you can derive from my preferences a conclusion which is judged absurd by my current preferences, that’s grounds to change my preferences. Though unless it’s a preference reversal, such a derivation usually rests on both the preferences and the decision algorithm. In this case, as long as you’re evaluating expected utility, a 1/bignum probability of +biggernum utilons is just a good deal. Afaict the nontrivial question is how to apply the thought experiment to the real world, where I don’t have perfect knowledge or well calibrated probabilities, and want my mistakes to not be catastrophic. And the answer to that might be a decision algorithm that doesn’t look exactly like expected utility maximization, but whose analogue of the utility function is still unbounded. Not that I have any more precise suggestions.
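As a minimal sketch of that arithmetic, with arbitrary placeholder magnitudes standing in for bignum and biggernum:

```python
# Under plain expected-utility maximization, a 1/bignum chance of +biggernum
# utilons beats a sure cost as long as biggernum/bignum exceeds that cost.
bignum = 10**100                  # placeholder magnitudes, not claims about
biggernum = 10**120               # any particular thought experiment
cost_in_utilons = 1.0             # utility of whatever stake you put up

expected_gain = biggernum / bignum       # = (1/bignum) * biggernum = 1e20
print(expected_gain > cost_in_utilons)   # True: the gamble is "just a good deal"
```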
What if you aren’t balancing tiny probabilities, and Omega just gives you an 80% chance of 10^^3 years and asks whether you want to pay a penny to switch to an 80% chance of 10^^4 years? Assuming both of those are so far into the diminishing-returns end of your bounded utility function that you see a negligible (< 20% of a penny) difference between them, that seems to me like an absurd conclusion in the other direction. Just giving up an unbounded reward is a mistake too.
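A minimal sketch of the bounded side, assuming a toy saturating utility u(t) = 1 - exp(-t/T) with a hypothetical scale T = 100 years (nothing hinges on the exact form, only on boundedness):

```python
import math

U_MAX = 1.0
T = 100.0                      # hypothetical scale at which utility saturates

def u(years):
    """Toy bounded utility over lifespan: approaches U_MAX for years >> T."""
    return U_MAX * (1.0 - math.exp(-years / T))

# 10^^3 and 10^^4 years are both unimaginably far past T, so their utilities
# are numerically indistinguishable from U_MAX; finite/inf stand-ins below.
u_short = u(1e300)             # proxy for 10^^3 years: already fully saturated
u_long  = u(float("inf"))      # proxy for 10^^4 years

penny = 1e-12                  # however small the penny's utility is...
gap = 0.8 * (u_long - u_short) # ...the 80%-weighted gap is smaller still
print(gap < penny)             # True: the bounded agent refuses to pay a penny
```

The 80%-weighted gap between the two outcomes falls below any penny’s worth of utility, so the bounded maximizer declines the switch and walks away from the vastly larger reward.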