But then you are stuck with a bound. If you ever reach it, then you suddenly stop caring about saving any more lives.
Just because there’s a bound doesn’t mean there’s a reachable bound. The range of the utility function could be bounded but open. In fact, it probably is.
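To illustrate with a toy example of my own (not from the original exchange): a utility function can approach its bound forever without attaining it, so there is no point at which the agent stops caring about one more life.

```python
# A toy bounded-but-open utility function (my illustration, not from the
# thread): u(n) = 1 - 2**(-n) over n saved lives. The range is [0, 1):
# bounded above by 1, but the bound is never reached, so saving one more
# life always has strictly positive marginal utility.

def u(n: int) -> float:
    return 1.0 - 2.0 ** (-n)

for n in [1, 10, 50]:
    print(n, u(n), u(n + 1) - u(n))  # marginal utility: positive, shrinking
```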
I think you are suggesting an asymptotic bound. I literally discuss this just a sentence after what you are quoting.
Pascal’s mugger isn’t a problem for agents with unbounded utility functions either: they just go ahead and pay the mugger. The fact that this seems irrational to you shows that agents with unbounded utility functions seem so alien that you can’t empathize with them.
I also discussed biting the bullet on Pascal’s Mugging in the post. The problem isn’t just that you will be mugged of all of your money very quickly. It’s that expected utility doesn’t even converge. Every action has infinite positive or negative expected utility, or its expected utility is simply undefined. Increasingly improbable hypotheses utterly dominate the calculation.
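A minimal sketch of that non-convergence, with made-up numbers of my own: if the prior probability of hypothesis k shrinks like 2^(−k) but the utility it promises grows like 4^k, each successive hypothesis contributes more expected utility than the last, and the sum diverges.

```python
# Sketch of non-convergent expected utility (toy numbers of my own):
# hypothesis k has prior probability 2**(-k) but promises utility 4**k,
# so term k contributes 2**k and the partial sums grow without bound.

def partial_expected_utility(n_terms: int) -> float:
    return sum(2.0 ** (-k) * 4.0 ** k for k in range(1, n_terms + 1))

for n in [5, 10, 20]:
    print(n, partial_expected_utility(n))  # 62.0, 2046.0, 2097150.0
```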
I do think that agents can have unbounded preferences without wanting to pay the mugger, or to spend resources on low-probability bets in general. Median utility and some other alternatives allow this.
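A toy contrast (my own numbers, not from the post): a median-utility maximizer ignores a one-in-10^10 chance of a huge payoff that an expected-utility maximizer would pay for.

```python
# Toy contrast between expected-utility and median-utility decisions on a
# Pascal's-mugging-style bet: pay 5 utils for a 1e-10 chance of a jackpot.
# The EU maximizer takes the bet; the median maximizer refuses.

def expected(lottery):  # lottery: list of (probability, utility) pairs
    return sum(p * u for p, u in lottery)

def median(lottery):
    acc = 0.0
    for p, u in sorted(lottery, key=lambda pu: pu[1]):
        acc += p
        if acc >= 0.5:
            return u

refuse = [(1.0, 0.0)]                      # keep your money: utility 0
pay    = [(1e-10, 1e15), (1 - 1e-10, -5)]  # tiny chance of a huge payoff

print(expected(pay) > expected(refuse))  # True: EU maximizer pays the mugger
print(median(pay) > median(refuse))      # False: median maximizer refuses
```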
Because you set the bound too low. The behavior you describe is the desirable behavior when large enough numbers are involved. For example, which do you prefer: (A) a 100% chance of a flourishing civilization with 3↑↑↑3 happy lives, or (B) a 99% chance of a flourishing civilization with 3↑↑↑↑3 happy lives and a 1% chance of extinction?
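For reference, the arrows here are Knuth’s up-arrow notation (the standard recursive definition, nothing specific to this thread); even 3↑↑3 is already in the trillions, and each extra arrow is an incomprehensible jump.

```python
# Knuth's up-arrow recursion (standard definition). Only tiny inputs are
# computable: 3↑↑↑3 is a tower of 3s that is 7625597484987 levels tall,
# and 3↑↑↑↑3 (option B above) is incomparably larger still.

def up_arrow(a: int, n: int, b: int) -> int:
    """Compute a ↑^n b."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

print(up_arrow(3, 1, 3))  # 3↑3  = 27
print(up_arrow(3, 2, 3))  # 3↑↑3 = 3**27 = 7625597484987
```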
I currently think that 3↑↑↑↑3 happy lives with a 1% chance of extinction is the correct choice, though I’m not certain. It vastly increases the probability that a given human will find themselves living in this happy civilization rather than somewhere else.
And in this sense, human preferences can’t be bounded, because we should always want to make the trade-offs that help Big Number of humans, no matter how Big.
I think you are suggesting an asymptotic bound. I literally discuss this just a sentence after what you are quoting.
Yes, but you still criticized bounded utility functions in a way that does not apply to asymptotic bounds.
The problem isn’t just that you will be mugged of all of your money very quickly. It’s that expected utility doesn’t even converge.
OK, that’s a good point; non-convergence is a problem for EU maximizers with unbounded utility functions. There are exactly two ways out of this that are consistent with the VNM assumptions:
(1) You can omit the gambles on which your utility function does not converge from the domain of your preference relation (the completeness axiom says that the preference relation is defined on all pairs of lotteries, but it doesn’t actually say what a “lottery” is, and only relies on it being possible to mix finite numbers of lotteries). If your utility function has sufficiently sharply diminishing returns, it can still converge on every lottery it could possibly encounter in real life, even while not being bounded; see the numeric sketch after point (2). That kind of agent will have the behavior I described in my parenthetical remark in my original comment.
(2) You can just pick some utility for the lottery, without worrying about the fact that it isn’t the expected utility of the outcomes. The VNM theorem actually gives you a utility function defined directly on lotteries, rather than on outcomes, in a way that is linear with respect to probability. For lotteries over finitely many outcomes, this means you can just define the utility function on the outcomes and use expected utility, but the theorem says nothing about the utility function being continuous, and thus doesn’t say that the utility of a lottery involving infinitely many outcomes is what you would expect from the utilities of those outcomes. I’ve never heard anyone seriously suggest this resolution, though, and you could probably rule it out with a stronger continuity axiom on the preference relation.
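Here is the numeric sketch promised under point (1), with toy numbers of my own: take u = log2 of the payoff (unbounded, sharply diminishing returns) and a lottery whose payoffs grow like 2^k while their probabilities shrink like 2^(−k); the expected utility converges to 2.

```python
# Point (1) with toy numbers: an unbounded utility function (log2 of the
# payoff) whose expected utility still converges on a lottery with payoffs
# growing like 2**k and probabilities shrinking like 2**(-k).
import math

def partial_eu(n_terms: int) -> float:
    # outcome k: probability 2**(-k), payoff 2**k, so utility is exactly k
    return sum(2.0 ** (-k) * math.log2(2.0 ** k) for k in range(1, n_terms + 1))

for n in [10, 30, 60]:
    print(n, partial_eu(n))  # approaches 2.0, though log2 is unbounded
```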
I don’t like either of these approaches, but since realistic human utility functions are bounded anyway, it doesn’t really matter.
I do think that agents can have unbounded preferences without wanting to pay the mugger.
Being willing to pay some Pascal’s mugger for any non-zero probability of payoff is basically what it means to have an unbounded utility function in the VNM sense.
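To spell that out (a toy framing of my own): for any probability p > 0 and any finite cost, unboundedness guarantees there is some promised outcome that makes paying the mugger positive in expectation.

```python
# Sketch of the equivalence (toy framing of my own): with an unbounded
# utility function, for ANY p > 0 there is a promised outcome large enough
# that paying the mugger has positive expected utility.

def utility_the_mugger_must_promise(p: float, cost_in_utils: float) -> float:
    # positive EU requires p * u_promised - (1 - p) * cost_in_utils > 0
    return (1 - p) * cost_in_utils / p

for p in [1e-3, 1e-9, 1e-30]:
    print(p, utility_the_mugger_must_promise(p, cost_in_utils=5.0))
# the threshold grows as p shrinks, but an unbounded utility function
# always has outcomes above it, so the agent pays at every p > 0
```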
Median utility and some other alternatives allow this.
The question of whether a given median utility maximizer has unbounded preferences doesn’t even make sense, because the utility functions that they maximize are invariant under positive monotonic transformations, so any given median-maximizing agent has preferences that can be represented both with an unbounded utility function and with a bounded one.
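A quick demonstration of that invariance (toy lotteries of my own): applying a strictly increasing transform such as tanh, which maps an unbounded utility function onto a bounded one, leaves every comparison a median maximizer makes unchanged.

```python
# Median-maximizer invariance (toy lotteries of my own): a strictly
# increasing transform like tanh turns an unbounded utility function into
# a bounded one without changing any median-utility comparison.
import math

def median_utility(lottery, u):
    # lottery: list of (payoff, probability) pairs
    acc = 0.0
    for payoff, p in sorted(lottery, key=lambda xp: u(xp[0])):
        acc += p
        if acc >= 0.5:
            return u(payoff)

a = [(1.0, 0.3), (2.0, 0.4), (100.0, 0.3)]
b = [(0.0, 0.5), (3.0, 0.5)]

unbounded = lambda x: x           # unbounded representation
bounded = lambda x: math.tanh(x)  # bounded representation, same ordering

print(median_utility(a, unbounded) > median_utility(b, unbounded))  # True
print(median_utility(a, bounded) > median_utility(b, bounded))      # True
```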
I currently think that 3↑↑↑↑3 happy lives with a 1% chance of extinction is the correct choice, though I’m not certain.
I’m confident that you’re wrong about your preferences under reflection, but my defense of that assertion would rely on the independence axiom, which I think I’ve already argued with you about before, and which Benja also defends in a section here.
It vastly increases the probability that a given human will find themselves living in this happy civilization rather than somewhere else.
I was assuming that the humans mentioned in the problem accounted for all (or almost all) of the humans. Sorry if that wasn’t clear.