When TORTURE v DUST SPECKS was discussed before, some people made suggestions along the following lines: perhaps when you do something to N people, the resulting utility change only grows as fast as (something like) the length of the smallest program it takes to output a number as big as N. (No one put it quite like that, which is perhaps just as well, since I’m not sure it can be made to make sense. But, e.g., Tom McCabe proposed that if you inflict a dust speck on 3^^^3 people, the number of non-identical people suffering the dust speck will be far smaller than that, and that this greatly reduces the resulting disutility. Wei Dai made a proposal about discounting utilities by some sort of measure related to algorithmic complexity. Etc.) Anyway, I mention all this because it may be more believable within a single person’s life than when aggregated over many people: while it sure seems that 10^N years of life is a whole lot better than N years when N is large, maybe for really large N that stops being true. Note that this doesn’t require bounded utilities.
If so, then it seems (to me, handwavily) like the point at which you start refusing to be led down the garden path might actually be quite early. For my part, I don’t think I’d take more than two steps down that path.
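(A toy sketch, to make the flavour of that discounting concrete. It uses log2(N) as a crude, computable stand-in for the length of the smallest program that outputs a number as big as N; the real quantity is uncomputable, and the exact rule here is my own assumption, not anyone’s worked-out proposal.)

```python
import math

def program_size_proxy(n) -> float:
    """Crude stand-in for the length, in bits, of the smallest program that
    outputs a number at least as big as n. The true quantity is uncomputable;
    log2 is only an illustrative lower bound."""
    return max(1.0, math.log2(n))

def discounted_utility(years) -> float:
    """Toy utility of living `years` years: still unbounded, but it grows
    only as fast as the complexity proxy."""
    return program_size_proxy(years)

for n in (10, 100, 1000):
    gap = discounted_utility(10**n) - discounted_utility(n)
    print(n, round(gap, 1))   # roughly 3.3 * n: 10^N years beats N years by only a modest margin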
As long as your U(live n years) is unbounded, then my reductio holds. With the discounting scheme you’re proposing, Omega will need to offer you uncomputable amounts of lifespan to induce you to accept his offers, but you’ll still accept them and end up with a 1/3^^^3 chance of a finite lifespan.
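(Roughly what that garden path looks like, under assumed numbers: each offer multiplies the already-discounted prize by 20 while cutting the probability of getting anything at all by a factor of 10. These figures are illustrative, not the terms of the original dilemma.)

```python
# Hypothetical garden path: each offer multiplies the (already discounted)
# utility of the prize by 20 while cutting the probability of receiving
# anything at all by a factor of 10. The numbers are illustrative only.
prob, utility = 1.0, 1.0
for step in range(20):
    new_prob, new_utility = prob / 10, utility * 20
    if new_prob * new_utility > prob * utility:   # expected-utility test
        prob, utility = new_prob, new_utility     # ...always passes: EU doubles each step
print(prob)             # 1e-20 and still shrinking toward (eventually) 1/3^^^3 territory
print(prob * utility)   # while expected utility keeps growing, so you keep accepting
```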
How is he going to describe to me what these uncomputable amounts of lifespan are, and how will he convince me that they’re big enough to justify reducing the probability of getting them?
By using non-constructive notation, like BusyBeaver(10^n). Surely you can be convinced that the smallest program it takes to output a number as big as BusyBeaver(10^n) is of size roughly 10^n, and therefore accept a 10-fold reduction in probability to increase n by 1?
Also, if you can’t be convinced, then your utility function is effectively bounded.
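(Spelling out the bookkeeping in that reply, under the assumption that the discounted utility of living BusyBeaver(10^n) years is roughly 10^n, i.e. the size of the smallest program that can output such a number, ignoring additive constants.)

```python
def discounted_utility_of_bb(n: int) -> float:
    # Assumption: U(live BusyBeaver(10**n) years) ~ 10**n under the
    # program-size discounting discussed above (constants ignored).
    return float(10**n)

for n in range(1, 6):
    prob = 10.0**(-n)                        # a 10-fold probability cut per step
    eu = prob * discounted_utility_of_bb(n)  # expected discounted utility
    print(n, eu)                             # constant at 1.0: each step is break-even,
                                             # so any sweetener at all induces acceptance
```

So even with this aggressive discounting, Omega only needs to raise n a little faster than the probability falls to keep each offer attractive.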
Somewhere I missed something: is there something wrong with bounded utilities? Every usable solution to these manipulations of infinity gets dismissed because it is bounded; if they work, what is the problem?
If your utility function is in fact bounded, then there’s nothing wrong with that. But the utility function isn’t up for grabs. If I care about something without bound, then I can’t solve the dilemma by switching to a bounded utility function; that would simply make me optimize for some metric other than the one I wanted.
What does “the utility function isn’t up for grabs” mean? I think Eliezer originated that phrase, but he apparently also believes that we can be and should be persuaded by (some) moral arguments. Aren’t these two positions contradictory?
(It seems like a valid or at least coherent, and potentially persuasive, argument that unbounded utility functions lead to absurd decisions.)
A notion can be constant and yet we can learn about it.
For example: “The set of all prime numbers” is clearly unchanged by our reasoning, and yet we learn about it (whether it is finite, for example).
Kripke used (for a different purpose) the example of the morning star and the evening star: the two concepts are discovered (from scientific evidence) to be the same concept.
The argument that unbounded utility functions lead to absurdity is also persuasive.
That seems to be a reasonable interpretation, but if we do interpret “the utility function isn’t up for grabs” that way, as a factual claim that each person has a utility function that can be discovered but not changed by moral arguments and reasoning, then I think it’s far from clear that the claim is true.
There could be other interpretations that may or may not be more plausible, and I’m curious what Eliezer’s own intended meaning is, as well as what pengvado meant by it.
There is a sense in which anything that makes choices does have a utility function—the utility function revealed by its choices. In this sense, for example, akrasia doesn’t exist: people prefer to procrastinate, as revealed by their choice to procrastinate.
People frequently slip back and forth between this sense of “utility function” (a rather strange description of their behavior, whatever that is) and the utilitarian philosophers’ notions of “utility”, which have something to do with happiness/pleasure/fun. To the extent that people pursue happiness, pleasure, and fun, the two senses overlap. However, in my experience, people frequently make themselves miserable or make choices according to lawful rules (of morality, say), without internal experiences of pleasure in following those rules.
And it’s worse than just akrasia. If you have incoherent preferences and someone money-pumps you, then the revealed utility function is “likes running around in circles”, i.e. it isn’t even about the choices you thought you were deciding between.
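(A minimal money-pump sketch of that point, with made-up items and fees: an agent that prefers B to A, C to B, and A to C pays for each “upgrade” and ends one lap poorer, holding exactly what it started with. The revealed utility function is about paying to go in circles, not about A, B, or C.)

```python
# Cyclic preferences: B over A, C over B, A over C (all made up).
prefers = {("B", "A"), ("C", "B"), ("A", "C")}

def offer(holding, offered, money, fee=0.01):
    """Trade, and pay the fee, whenever the agent prefers the offered item."""
    if (offered, holding) in prefers:
        return offered, money - fee
    return holding, money

holding, money = "A", 1.00
for offered in ("B", "C", "A"):          # one lap around the preference cycle
    holding, money = offer(holding, offered, money)
print(holding, round(money, 2))          # 'A' 0.97: the same item, minus three fees
```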
I agree that if you can derive from my preferences a conclusion which is judged absurd by my current preferences, that’s grounds to change my preferences. Though unless it’s a preference reversal, such a derivation usually rests on both the preferences and the decision algorithm. In this case, as long as you’re evaluating expected utility, a 1/bignum probability of +biggernum utilons is just a good deal. Afaict the nontrivial question is how to apply the thought experiment to the real world, where I don’t have perfect knowledge or well calibrated probabilities, and want my mistakes to not be catastrophic. And the answer to that might be a decision algorithm that doesn’t look exactly like expected utility maximization, but whose analogue of the utility function is still unbounded. Not that I have any more precise suggestions.
What if you aren’t balancing tiny probabilities, and Omega just gives you an 80% chance of 10^^3 years and asks whether you want to pay a penny to switch to an 80% chance of 10^^4 years? Assuming both of those are so far into the diminishing-returns end of your bounded utility function that you see a negligible (< 20% of a penny) difference between them, that seems to me like an absurd conclusion in the other direction. Just giving up an unbounded reward is a mistake too.
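(Spelling out that arithmetic with an illustrative bounded utility function: a simple saturating curve with a 100-year scale, a made-up utility for a penny, and stand-in magnitudes, since 10^^3 won’t fit in a float. All of those are assumptions of mine, chosen only to show the saturation effect.)

```python
import math

def bounded_utility(years: float) -> float:
    # Illustrative bounded utility, saturating at 1; the 100-year scale
    # is an arbitrary assumption for the sketch.
    return 1.0 - math.exp(-years / 100.0)

penny_utility = 1e-9               # assumed utility of keeping one penny

u_small = bounded_utility(1e6)     # stand-in for 10^^3 years (already far past saturation)
u_large = bounded_utility(1e9)     # stand-in for 10^^4 years
gain = 0.8 * (u_large - u_small)   # expected gain from paying the penny to switch
print(gain < penny_utility)        # True: the bounded agent refuses to pay a penny
                                   # for an astronomically larger prize
```

The same saturation that protects the bounded agent from the garden path also makes it indifferent to unboundedly large gains.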
… is, I am gratified to see, the same as mine.
Yup.
Speaking as if “everyone” has a utility function is common around here, but it makes my teeth hurt.