But what do you mean by “genuinely believe in unbounded utility”? Given the way Von Neumann and Morgenstern define numerical utility, unbounded utility basically just means that you have desires that lead you to keep accepting certain bets, no matter how low the probability goes. They talk about this in their work:
And yet the concept of mathematical expectation has been often questioned, and its legitimateness is certainly dependent upon some hypothesis concerning the nature of an “expectation.” Have we not then begged the question? Do not our postulates introduce, in some oblique way, the hypotheses which bring in the mathematical expectation?
More specifically: May there not exist in an individual a (positive or negative) utility of the mere act of “taking a chance,” of gambling, which the use of the mathematical expectation obliterates?
They go on to say that “we have practically defined numerical utility as being that thing for which the calculus of mathematical expectations is legitimate,” and conclude that “concepts like a ‘specific utility of gambling’ cannot be formulated free of contradiction on this level.”
In other words, saying that for any given utility there exists a utility twice as great does not mean anything except that there is some object such that you consider a 50% chance of it, and a 50% chance of nothing, equal in value to the thing that has the first utility. In the same way, you cannot assert that you have an unbounded utility function in their sense unless there is some reward for which you are willing to pay $100 at objective odds of one in a googolplex of getting it. If you will not pay $100 for any reward whatsoever at those odds (as I will not), then your utility function is not unbounded in the Von Neumann-Morgenstern sense.
This is just a mathematical fact. If you still want to say your utility function is unbounded despite not accepting any bets of this kind, then you need a new definition of utility.
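To make the arithmetic concrete, here is a minimal sketch (the specific utility numbers, such as treating the utility of $100 as 100, are illustrative assumptions rather than anything from Von Neumann and Morgenstern, and a literal googolplex is too large to compute with directly, so the comparison is done in log space):

```python
# Minimal sketch: an expected-utility maximizer takes the googolplex bet
# whenever odds * u(reward) > u($100). A literal googolplex (10**10**100)
# cannot be represented directly, so we compare in log10 space.

LOG10_GOOGOLPLEX = 10 ** 100  # log10 of a googolplex

def accepts_bet(log10_reward_utility, log10_price_utility=2,
                log10_odds=-LOG10_GOOGOLPLEX):
    """Take the bet iff log10(odds) + log10(u(reward)) > log10(u(price))."""
    return log10_odds + log10_reward_utility > log10_price_utility

# Bounded utility (capped at, say, 10**15): the bet is refused for every reward.
print(accepts_bet(15))                    # False

# Unbounded utility: some reward is large enough that the bet must be taken.
print(accepts_bet(LOG10_GOOGOLPLEX + 3))  # True
```

The point of the sketch is only that "unbounded" in their sense just is the existence of a reward that makes the second comparison come out true.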
People don’t have utilities; we have desires, preferences, moral sentiments, etc… and we want to (or have to) translate them into utility-equivalent formats. We also have meta-preferences that we want to respect, such as “treat the desires/happiness/value of similar beings similarly”. That leads straight to unbounded utility as the first candidate.
So I’m looking at “what utility should we choose” rather than “what utility do we have” (because we don’t have any currently).
I agree that we do not objectively have a utility function, but rather the kinds of things that you describe. I am simply saying that the utility function that those things most resemble is a bounded utility function, and people’s absolute refusal to do anything for the sake of an extremely small probability proves that fact.
I am not sure that the meta-preference that you mention “leads straight to unbounded utility.” However, I agree that understood in a certain way it might lead to that. But if so, it would also lead straight to accepting extremely small probabilities of extremely large rewards. I think that people’s desire to avoid the latter is stronger than their desire for the former, if they have the former at all.
I do not have that particular meta-preference because I think it is a mistaken result of a true meta-preference for being logical and reasonable. I think one can be logical and reasonable while preferring benefits that are closer over benefits that are more distant, even when those benefits are similar in themselves.
I think that people’s desire to avoid the latter is stronger than their desire for the former, if they have the former at all.
Yes, which is what my system is set up for. It allows people to respect their meta-preference, up to the point where mugging and other issues would become possible.
An alternative to bounded utility is to suppose that probabilities go to zero faster than utility. In fact, the latter is a generalisation of the former, since the former is equivalent to supposing that the probability is zero for large enough utility.
However, neither “utility is bounded” nor “probabilities go to zero faster than utility” amount to solutions to Pascal’s Mugging. They only indicate directions in which a solution might be sought. An actual solution would provide a way to calculate, respectively, the bound or the limiting form of probabilities for large utility. Otherwise, for any proposed instance of Pascal’s Mugging, there is a bound large enough, or a rate of diminution of P*U low enough, that you still have to take the bet.
Set the bound too low, or the diminution too fast (“scope insensitivity”), and you pass up some gains that some actual people think extremely worthwhile, such as humanity expanding across the universe instead of being limited to the Earth. Telling people they shouldn’t believe in such value, while being unable to tell them how much value they should believe in, isn’t very persuasive.
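A toy illustration of both failure modes, with entirely made-up numbers for the mugger’s claim, the credence, and the candidate bounds:

```python
# Toy mugging: pay utility 100 (the $100) for a claimed payoff held at low credence.
# Nothing here says what the "right" bound is; that is exactly the gap.

claimed_utility = 10 ** 30   # what the mugger promises (made up)
price_utility = 100          # utility of the $100 handed over (made up)
credence = 10 ** -12         # probability assigned to the mugger's story (made up)

def take_bet(bound):
    """With utility capped at `bound`, take the bet iff the capped expected
    gain exceeds the price."""
    return min(claimed_utility, bound) * credence > price_utility

print(take_bet(bound=10 ** 10))  # False: bound low enough to refuse (and also to
                                 # pass up genuinely huge goods priced the same way)
print(take_bet(bound=10 ** 20))  # True: same mugging, but with a higher bound
                                 # you still have to take the bet
```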
This alternative only works because it asserts that such-and-such a bet is impossible: e.g. there may be a reward that you would pay $100 for if the odds of getting it were one in a googolplex, but, it says, the odds for that particular reward are in fact always less than one in a googolplex.
That still requires you to bite the bullet of saying that yes, if the odds were definitely one in a googolplex, I would pay $100 for that bet.
But for me at least, there is no reward that I would pay $100 for at those odds. This means that I cannot accept your alternative. And I don’t think that there are any other real people who would consistently accept it either, not in real life, regardless of what they say in theory.
That still requires you to bite the bullet of saying that yes, if the odds were definitely one in a googolplex, I would pay $100 for that bet.
The idea of P*U tending to zero axiomatically rules out the possibility of being offered that bet, so there is no need to answer the hypothetical. No probabilities at the meta-level.
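For concreteness, one hypothetical way to write down a “P*U tends to zero” prior (the inverse-square form is only an example, not a proposal):

```python
# Hypothetical prior: credence in any claim of utility u is at most 1/u**2,
# so p * u <= 1/u, which shrinks toward zero as the claimed utility grows.
# Under such a prior, a bet at fixed odds of one in a googolplex on an
# astronomically valuable reward is never actually on offer.

def max_credence(claimed_utility):
    return 1.0 / claimed_utility ** 2

for u in (10 ** 3, 10 ** 9, 10 ** 30):
    p = max_credence(u)
    print(u, p * u)  # expected gains: 0.001, 1e-09, 1e-30
```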
Or, if you object to the principle of no probabilities at the meta-level, the same objection can be made to bounded utility. This requires you to bite the bullet of saying that yes, if the utility really were that enormous etc.
The same applies to any axiomatic foundation for utility theory that avoids Pascal’s Mugging. You can always say, “But what if [circumstance contrary to those axioms]? Then [result those axioms rule out].”
The two responses are not equivalent. The utility in a utility function is subjective in the sense that it represents how much I care about something; and I am saying that there is literally nothing that I care enough about to pay $100 for a probability of one in a googolplex of accomplishing it. So for example if I knew for an absolute fact that for $100 I could get that probability of saving 3^^^^^^^^^3 lives, I would not do it. Saying the utility can’t be that enormous does not rule out any objective facts: it just says I don’t care that much. The only way it could turn out that “if the utility really were that enormous” would be if I started to care that much. And yes, I would pay $100 if it turned out that I was willing to pay $100. But I’m not.
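One way to picture “not caring that much” as a bounded utility function (the saturating form, the cap, and the scale are arbitrary, purely for illustration):

```python
# A saturating "caring" function: u(n lives) = CAP * n / (n + SCALE).
# It never exceeds CAP, however large n is, so a one-in-a-googolplex
# chance of it can never outweigh $100. CAP and SCALE are made up.

CAP = 1000.0
SCALE = 1000.0

def utility_of_lives(n):
    return CAP * n / (n + SCALE)

print(utility_of_lives(10))         # ~9.9: nearly linear for small numbers
print(utility_of_lives(10 ** 100))  # ~1000.0: saturated at the cap
```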
Attempting to rule out a probability by axioms, on the other hand, is ruling out objective possibilities, since objective facts in the world cause probabilities. The whole purpose of your axiom is that you are unwilling to pay that $100, even if the probability really were one in a googolplex. Your probability axiom is simply not your true rejection.
Saying the utility can’t be that enormous does not rule out any objective facts: it just says I don’t care that much.
To say you don’t care that much is a claim of objective fact. People sometimes discover that they do very much care (or, if you like, change to begin to very much care) about something they did not before. For example, conversion to ethical veganism. You may say that you will never entertain enormous utility, and this claim may be true, but it is still an objective claim.
And how do you even know? No-one can exhibit their utility function, supposing they have one, nor can they choose it.
As I said, I concede that I would pay $100 for that probability of that result, if I cared enough about that result, but my best estimate of how much I care about that probability of that result is “too little to consider.” And I think that is currently the same for every other human being.
(Also, you consistently seem to be implying that “entertaining enormous utility” is something different from being willing to pay a meaningful price for a small probability of something; but these are simply identical: asking whether I might objectively accept an enormous utility assignment is just the same thing as asking whether there might be some principles which would cause me to pay the price for the small probability.)