This is a very good post. The real question that has not been explicitly asked is the following:
How can utility be maximised when there is no maximum utility?
The answer of course is that it can’t.
Some of the ideas offered as solutions, or as approximations of solutions, are quite clever, but because for any agent you can trivially construct another agent who will perform better, and because there is no metric other than utility itself for determining how much better one agent is than another, the solutions aren’t even interesting here. Trying to find limits such as storage capacity or computing power only avoids the real problem.
These are simply problems that have no solutions, like the problem of finding the largest integer has no solution. You can get arbitrarily close, but that’s it.
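To make the “for any agent there is a better agent” point concrete, here is a minimal sketch. It assumes a toy setting in which a strategy is simply “stop after N rounds” and utility grows without bound in N; the function u below is purely illustrative and not taken from the original post.

```python
def u(n):
    """Toy utility: strictly increasing and unbounded in n (illustrative only)."""
    return n  # any strictly increasing, unbounded function behaves the same way

def better_agent(n):
    """Given an agent that stops after n rounds, return one that does strictly better."""
    return n + 1

n = 10**6
assert u(better_agent(n)) > u(n)  # holds for every n, so no agent is optimal
```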
And while I’m at it, let me quote another limitation of utility that I very recently wrote about in a comment to Pinpointing Utility:
Assuming you assign utility to lifetime as a function of life quality in such a way that for any constant quality longer life has strictly higher (or lower) utility than shorter life, then either you can’t assign any utility to actually infinite immortality, or you can’t differentiate between higher-quality and lower-quality immortality, or you can’t represent utility as a real number.
This seems like it can be treated with non-standard reals or similar.
Yeah, it can. You still run into the problem that a one in a zillion chance of actual immortality is more valuable than any amount of finite lifespan, though, so as long as the probability of actual immortality isn’t zero, chasing after it will be the only thing that guides your decision.
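One way to see this, writing ω for the infinite utility of actual immortality and L for any finite utility (this notation is an illustration, not the commenter’s):

```latex
\[
  p \cdot \omega > L
  \quad \text{for every real } p > 0 \text{ and every finite } L,
  \quad \text{since } \omega > L / p \text{ whenever } L / p \text{ is finite.}
\]
```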
Actually, it seems you can solve the immortality problem in ℝ after all; you just need to do it counterintuitively: 1 day is 1, 2 days is 1.5, 3 days is 1.75, etc., immortality is 2, and then you can add quality. Not very surprising, in fact, considering that immortality is effectively infinity and |ℕ| < |ℝ|.
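A minimal sketch of that assignment, assuming the closed form 2 - 2^(1-n) behind the listed values 1, 1.5, 1.75, …; the formula and the quality-as-a-multiplier choice are reconstructions, not taken from the comment.

```python
import math

def u_days(n):
    """Bounded utility of n days: 1, 1.5, 1.75, ... approaching 2.
    Closed form reconstructed from the listed values (an assumption)."""
    if math.isinf(n):
        return 2.0  # actual immortality gets the limit value
    return 2.0 - 2.0 ** (1 - n)

def u(n, quality=1.0):
    """One possible way to 'add quality': scale the lifetime utility by a quality factor."""
    return quality * u_days(n)

assert u(1) == 1.0 and u(2) == 1.5 and u(3) == 1.75
assert u(math.inf) == 2.0
assert u(math.inf, quality=1.1) > u(math.inf, quality=0.9)  # immortalities stay comparable
```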
But that would mean that the utility of a 50% chance of 1 day and a 50% chance of 3 days is 0.5*1 + 0.5*1.75 = 1.375, which is different from the utility of two days (1.5) that you would expect.
You can’t calculate utilities like that anyway; there’s no reason to assume that u(n days) should be 0.5 * (u(n+m days) + u(n-m days)) for any n or m. If you want to include immortality, you can’t assign utilities linearly, although you can get arbitrarily close by picking a factor higher than 0.5, as long as it’s < 1.
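A small sketch of that trade-off, reading “factor” as the ratio between successive per-day utility increments (a reconstruction of the intended meaning). With factor r = 0.5 you get the 1, 1.5, 1.75, … assignment above; with r closer to 1 the assignment stays bounded, so immortality still fits, while looking more nearly linear over any fixed range of days.

```python
def u(n, r=0.5):
    """Utility of n days when each extra day adds r times the previous day's increment.
    The increments 1, r, r**2, ... sum to the geometric series (1 - r**n) / (1 - r)."""
    return (1 - r ** n) / (1 - r)

def u_immortality(r=0.5):
    """The finite limit of u(n, r) as n grows without bound."""
    return 1 / (1 - r)

print(u(1), u(2), u(3))          # 1.0 1.5 1.75 -> the r = 0.5 assignment above
print(0.5 * u(1) + 0.5 * u(3))   # 1.375, not u(2) = 1.5: the assignment is not linear
print(u(2, r=0.99))              # ~1.99: with r near 1 it looks nearly linear for small n
print(u_immortality(r=0.99))     # 100.0: still finite, so immortality still fits
```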
At least with surreal numbers you could have an infinitesimal chance of getting a (first-order) infinite life span and have it able to win or lose against a finite chance of a finite life. In the transition to hyperreal analysis, I would expect the improved accuracy for vanishingly small chances (from arbitrarily small reals to actually infinitesimal values) to happen at the same time as the rewards go from arbitrarily large values to actually infinite amounts.
Half of any first-order infinitesimal chance could have some first-order infinite reward that would make it beat some finite chance of a finite reward. However, if we have a second-order infinitesimal chance of only a first-order infinite reward, then it loses to any finite expected utility. So you have to attend not only to whether the chance is infinitesimal, but to how infinitesimal it is.
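A small worked comparison, writing ω for the first-order infinite and 1/ω, 1/ω² for first- and second-order infinitesimals (illustrative notation, not the commenter’s):

```latex
\[
  \frac{1}{\omega} \cdot \omega = 1
  \quad \text{(a finite expected utility, so it can win or lose against finite stakes)}
\]
\[
  \frac{1}{\omega^{2}} \cdot \omega = \frac{1}{\omega} < \varepsilon
  \ \text{ for every real } \varepsilon > 0
  \quad \text{(so it loses to any positive finite expected utility)}
\]
```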
There is a difference between an infinite amount and “grows without bound”. If I mark the first-order infinite with w, there is no trouble saying that a result of w+2 wins over w. Thus, if the function does have a peak, it doesn’t matter how high that peak is, whether it is w times w or w to the power of w. In order to break things, you would either have to have a scenario where god offers an unspecified infinitesimal chance of an equally infinite time in heaven, or have god offer the deal an unspecified number of times. “A lot” isn’t a number between 0 and 1 and thus not a probability. Similarly, an “unbounded amount” isn’t a specified amount and thus not a number.
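As a worked illustration of that point (writing ω for w, the first-order infinite): all of these are well-specified values and compare cleanly, no matter how large they get, whereas “a lot” and “an unbounded amount” do not name values at all.

```latex
\[
  \omega \;<\; \omega + 2 \;<\; \omega \cdot \omega \;<\; \omega^{\omega}
\]
```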
The absurdity of the situation is that it is ill-defined, or that it contains contradictions other than the infinities. For if god promises me (some possibly infinite number of) days in heaven and I never receive them, then god didn’t make good on his promise. So despite god’s abilities, I am in a position to make him break his promise, or I know beforehand that he can’t deliver the goods.

If you score by “earned days in heaven”, then only the one who continually accepts wins. If you score by days actually spent in heaven, then only spending them counts, and having earned them doesn’t yet generate any points directly. Whether an earned day indirectly means a day spent depends on the ability to cash in, and that depends on my choice. The situation doesn’t have its probabilities specified in the absence of the strategy used.

Therefore any agent that tries to calculate the “right odds” from the description of the problem either has to use, as a basis, the very strategy they will end up formulating (which would totally negate any usefulness of coming up with the strategy), or their analysis assumes they use a different strategy than the one they actually end up using. So either they have to hear god’s proposal wrong in order to execute on it right, or they get it right by the luck of assuming the right thing from the start. Contemplating this issue, you therefore either come to know that your score is lower than it could be for another agent, realise that you don’t model yourself correctly, get the maximum score because you guessed right, or can’t know what your score is. Knowing that you solved the problem right is impossible.
“These are simply problems that have no solutions, like the problem of finding the largest integer has no solution. You can get arbitrarily close, but that’s it.”—Actually, you can’t get arbitrarily close. No matter how high you go, you are still infinitely far away.
“How can utility be maximised when there is no maximum utility? The answer of course is that it can’t.”
I strongly agree with this. I wrote a post today where I came to the same conclusion, but arguably took it a step further by claiming that the immediate logical consequence is that perfect rationality does not exist, only an infinite series of better rationalities.