I agree that if you can derive from my preferences a conclusion that my current preferences judge absurd, that’s grounds to change my preferences. Though unless it’s a preference reversal, such a derivation usually rests on both the preferences and the decision algorithm. In this case, as long as you’re evaluating expected utility, a 1/bignum probability of +biggernum utilons is just a good deal. Afaict, the nontrivial question is how to apply the thought experiment to the real world, where I don’t have perfect knowledge or well-calibrated probabilities, and want my mistakes not to be catastrophic. And the answer to that might be a decision algorithm that doesn’t look exactly like expected utility maximization, but whose analogue of the utility function is still unbounded. Not that I have any more precise suggestions.
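To make that arithmetic concrete, here is a minimal sketch of the expected-utility comparison; the specific probability, payoff, and cost are made-up stand-ins for “1/bignum” and “+biggernum”, not anything from the thought experiment itself.

```python
# Minimal sketch of the expected-utility comparison above. The numbers are
# arbitrary stand-ins: p_win for "1/bignum", payoff_utils for "+biggernum".
p_win = 1e-12          # tiny probability of the huge payoff
payoff_utils = 1e18    # huge payoff in utilons (utility is unbounded, so no cap)
cost_utils = 1.0       # utility of whatever stake you're asked to give up

expected_gain = p_win * payoff_utils   # 1e6 utilons in expectation
print(expected_gain > cost_utils)      # True: the gamble is "just a good deal"
```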
What if you aren’t balancing tiny probabilities, and Omega just gives you an 80% chance of 10^^3 years and asks whether you want to pay a penny to switch to an 80% chance of 10^^4 years? Assuming both of those are so far into the diminishing-returns end of your bounded utility function that you see a negligible (< 20% of a penny) difference between them, that seems to me like an absurd conclusion in the other direction. Just giving up an unbounded reward is a mistake too.
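For illustration, here is a sketch with one hypothetical bounded utility function, u(t) = 1 − exp(−t/T). The saturation scale T, the stand-in lifespans (10^^3 and 10^^4 years are far too large to represent directly), and the penny’s utility are all assumptions, chosen only to show how a bounded function ends up treating the two gambles as indistinguishable.

```python
import math

# Hypothetical bounded utility over years lived: u(t) = 1 - exp(-t / T),
# bounded above by 1. T, the stand-in lifespans, and the penny's utility
# are all assumed values; 10^^3 and 10^^4 years themselves would sit even
# deeper in the saturated region.
T = 100.0                       # years; where diminishing returns set in

def u(t):
    return 1.0 - math.exp(-t / T)

shorter = 1e9                   # stand-in for 10^^3 years
longer = 1e12                   # stand-in for 10^^4 years
penny_utils = 1e-9              # assumed utility of keeping the penny

gap = 0.8 * (u(longer) - u(shorter))   # expected-utility gain from switching
print(gap, gap > penny_utils)          # 0.0 False: bounded utility says keep the penny
```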