Obviously, to really put the idea of people having bounded utility functions to the test, you have to set aside the fact that it solves problems of small probabilities and incredibly good outcomes, and focus on its most unintuitive consequences. For one, having a bounded utility function means caring arbitrarily little about the differences between sufficiently good outcomes, and all of those outcomes could be certain too. You could come up with all kinds of thought experiments involving purchasing huge numbers of years of happy life, or some other good, for a few cents. You know all of this, so I wonder why you don't talk about it.
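To make that concrete, here is a minimal worked example; the specific functional form is my own illustrative assumption, not anything anyone here has endorsed. Suppose utility is bounded above by $B$, say $U(n) = B\,(1 - 2^{-n})$ for $n$ happy years. Then the value of each further doubling shrinks geometrically:

$$
U(2n) - U(n) \;=\; B\left(2^{-n} - 2^{-2n}\right) \;\le\; B \, 2^{-n} \;\longrightarrow\; 0 \quad \text{as } n \to \infty,
$$

so for large enough $n$, a certain doubling of an already-long happy life is worth less to you than a few cents, however the cents are valued.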
Also, I believe Eliezer thinks that an unbounded utility function describes at least his own preferences. I remember he made a comment about caring about additional happy years of life no matter how many he'd already been granted.
(I haven’t read most of the discussion in this thread or might just be missing something so this might be irrelevant.)
As far as I know the strongest version of this argument is Benja’s, here (which incidentally seems to deserve many more upvotes than it got).
Benja’s scenario isn’t a problem for normal people though, who are not reflectively consistent and whose preferences manifestly change over time.
Beyond that, it seems like people's preferences regarding the lifespan dilemma are somewhat confusing and probably inconsistent, much like their preferences regarding the repugnant conclusion. But that seems mostly orthogonal to Pascal's mugging and to the basic point: having unbounded utility means, by definition, that you are willing to accept a negligible chance of a sufficiently good outcome against probability nearly 1 of any fixed bad outcome, so if you object to the latter you are just objecting to unbounded utility.
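Spelling out why that follows directly from the definition (a sketch, with the status quo normalized to utility $0$ and the fixed bad outcome to utility $-C$, both normalizations being my own for illustration): consider a gamble that yields outcome $G$ with probability $\epsilon$ and the bad outcome otherwise. Expected utility maximization says to accept it whenever

$$
\epsilon \, U(G) - (1 - \epsilon)\, C \;>\; 0, \qquad \text{i.e.} \qquad U(G) \;>\; \frac{(1 - \epsilon)\, C}{\epsilon},
$$

and if utility is unbounded above, such a $G$ exists for every $\epsilon > 0$, no matter how small.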
I agree I was being uncharitable towards Eliezer. But it is true that at the end of this post he was suggesting giving up on unbounded utility, and that everyone in this crowd seems to ultimately take that route.