Relevant comment from the sequences (I had this in mind when writing parts of the OP but didn’t remember who wrote it, and failed to recognize the link because it was about Newcomb’s problem):
Another example: It was argued by McGee that we must adopt bounded utility functions or be subject to “Dutch books” over infinite times. But: The utility function is not up for grabs. I love life without limit or upper bound: There is no finite amount of life lived N where I would prefer a 80.0001% probability of living N years to an 0.0001% chance of living a googolplex years and an 80% chance of living forever. This is a sufficient condition to imply that my utility function is unbounded. So I just have to figure out how to optimize for that morality. You can’t tell me, first, that above all I must conform to a particular ritual of cognition, and then that, if I conform to that ritual, I must change my morality to avoid being Dutch-booked. Toss out the losing ritual; don’t change the definition of winning. That’s like deciding to prefer $1000 to $1,000,000 so that Newcomb’s Problem doesn’t make your preferred ritual of cognition look bad.
I sympathize with Eliezer’s intuition here but think he’s overstating the case. (Setting aside the fact that the exact example isn’t correct, and that McGee’s particular dutch book—at least as described by Eliezer—seems very unpersuasive.)
I don’t know if Eliezer wants to give up on unbounded utilities, utility functions over arbitrary lotteries, or on weak dominance. One of them must go, and the others are supported by pretty good intuitions. Giving up on infinite lotteries is in some sense the mildest, but given that our epistemic states are infinite lotteries this is quite a bullet! Giving up on the epistemic possibility of very large universes (as discussed by several commenters) also seems conceivable but no more comforting than giving up on the utility function.
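To make the trilemma concrete, here is the standard St. Petersburg-style construction (a generic sketch of the tension, not necessarily the exact argument from the OP or from McGee):

```latex
% Assume u is unbounded: choose outcomes X_n with u(X_n) \ge 2^n,
% and form the infinite lottery that assigns probability 2^{-n} to X_n:
L = \sum_{n=1}^{\infty} 2^{-n}\, X_n,
\qquad
\mathbb{E}\!\left[u(L)\right] \;\ge\; \sum_{n=1}^{\infty} 2^{-n} \cdot 2^n \;=\; \infty.
```

Now take any lottery \(L'\) that strictly improves every outcome of \(L\); it too has infinite expected utility, so an expected-utility representation must be indifferent between \(L\) and \(L'\), which conflicts with dominance. Dropping any one leg — unbounded \(u\), lotteries like \(L\), or the dominance principle — dissolves the contradiction, which is the three-way choice described above.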
I don’t think Eliezer’s position here is altogether different from someone who has the strong intuitions that A > B > C > A, and strong intuitions about transitivity. Faced with the incoherence such a person could just say “well my preferences are my preferences, so be it,” but I feel confident they’d be making a mistake.
This is not to say that I know the answer, and I don’t think this case is as straightforward as intransitivity. But I don’t think it’s right to glibly dismiss this kind of impossibility argument because “the utility function is not up for grabs.” The utility function may not be up for grabs, but intuitions about it are, and logical incoherence between intuitions is real evidence about those intuitions.
I also recommend this comment thread overall. (I actually think that “lifetime / size of universe” is a pretty good direction for bounded utility functions, and perhaps that’s Eliezer’s view?)