Naturalism versus unbounded (or unmaximisable) utility options
There are many paradoxes with unbounded utility functions. For instance, consider whether it’s rational to spend eternity in Hell:
Suppose that you die, and God offers you a deal. You can spend 1 day in Hell, and he will give you 2 days in Heaven, and then you will spend the rest of eternity in Purgatory (which is positioned exactly midway in utility between heaven and hell). You decide that it’s a good deal, and accept. At the end of your first day in Hell, God offers you the same deal: 1 extra day in Hell, and you will get 2 more days in Heaven. Again you accept. The same deal is offered at the end of the second day.
And the result is… that you spend eternity in Hell. There is never a rational moment to leave for Heaven—that decision is always dominated by the decision to stay in Hell.
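To make the structure of the deal concrete, here is a minimal sketch (assuming illustrative per-day utilities of -1 for Hell, +1 for Heaven and 0 for Purgatory, which are not part of the original problem): every finite cutoff policy ends up with positive total utility, while the always-accept policy, despite each acceptance looking like a net gain, drifts to -∞.

```python
# A minimal sketch of the Heaven/Hell deal.  The per-day utilities are
# illustrative assumptions: Hell = -1/day, Heaven = +1/day, Purgatory = 0/day.

def cutoff_policy_utility(n: int) -> int:
    """Accept the deal for n days, then cash in: n days of Hell, 2n days of Heaven."""
    return -1 * n + 1 * (2 * n)   # Purgatory contributes 0 forever after

def always_accept_running_utility(days_so_far: int) -> int:
    """Never cash in: after any number of days you have only accumulated Hell days."""
    return -1 * days_so_far

if __name__ == "__main__":
    for n in (1, 10, 1000):
        print(f"stop after {n:>4} days in Hell: total utility {cutoff_policy_utility(n):>5}")
    for t in (1, 10, 1000):
        print(f"always accept, day {t:>4}: running utility {always_accept_running_utility(t):>5}")
    # Each individual acceptance looks like a gain (-1 now, +2 banked),
    # yet the policy of always accepting never realises any of the banked days.
```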
Or consider a simpler paradox:
You’re immortal. Tell Omega any natural number, and he will give you that much utility. On top of that, he will give you any utility you may have lost in the decision process (such as the time wasted choosing and specifying your number). Then he departs. What number will you choose?
Again, there’s no good answer to this problem—any number you name, you could have got more by naming a higher one. And since Omega compensates you for extra effort, there’s never any reason to not name a higher number.
It seems that these are problems caused by unbounded utility. But that’s not the case, in fact! Consider:
You’re immortal. Tell Omega any real number r > 0, and he’ll give you 1-r utility. On top of that, he will give you any utility you may have lost in the decision process (such as the time wasted choosing and specifying your number). Then he departs. What number will you choose?
Again, there is no best answer—for any r, r/2 would have been better. So these problems arise not because of unbounded utility, but because of unbounded options. You have infinitely many options to choose from (sequentially in the Heaven and Hell problem, all at once in the other two) and the set of possible utilities from your choices does not possess a maximum—so there is no best choice.
What should you do? In the Heaven and Hell problem, you end up worse off if you make the locally dominant decision at each decision node—if you always choose to add an extra day in Hell, you’ll never get out of it. At some point (maybe at the very beginning), you’re going to have to give up an advantageous deal. In fact, since giving up once means you’ll never be offered the deal again, you’re going to have to give up arbitrarily much utility. Is there a way out of this conundrum?
Assume first that you’re a deterministic agent, and imagine that you’re sitting down for an hour to think about this (don’t worry, Satan can wait, he’s just warming up the pokers). Since you’re deterministic, and you know it, then your ultimate life future will be entirely determined by what you decide right now (in fact your life history is already determined, you just don’t know it yet—still, by the Markov property, your current decision also determines the future). Now, you don’t have to reach any grand decision now—you’re just deciding what you’ll do for the next hour or so. Some possible options are:
Ignore everything, sing songs to yourself.
Think about this some more, thinking of yourself as an algorithm.
Think about this some more, thinking of yourself as a collection of arguing agents.
Pick a number N, and accept all of God’s deals until day N.
Promise yourself you’ll reject all of God’s deals.
Accept God’s deal for today, hope something turns up.
Defer any decision until another hour has passed.
...
There are many other options—in fact, there are precisely as many options as you’ve considered during that hour. And, crucially, you can put an estimated expected utility to each one. For instance, you might know yourself, and suspect that you’ll always do the same thing (you have no self-discipline where cake and Heaven are concerned), so any decision apart from immediately rejecting all of God’s deals will give you -∞ utility. Or maybe you know yourself, and have great self-discipline and perfect precommitments: if you pick a number N in the coming hour, you’ll stick to it. Thinking some more may have a certain expected utility—which may differ depending on the directions in which you direct your thoughts. And if you know that you can’t direct your thoughts—well, then they’ll all have the same expected utility.
But notice what’s happening here: you’ve reduced the expected utility calculation over infinitely many options to one over finitely many options—namely, all the interim decisions that you can consider in the course of an hour. Since you are deterministic, the infinitely many options don’t have an impact: whatever interim decision you follow will uniquely determine how much utility you actually get out of this. And given finitely many options, each with an expected utility, choosing one doesn’t give any paradoxes.
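Here is a minimal sketch of that reduction; the interim decisions and their estimated expected utilities below are hypothetical placeholders, the point being only that once the menu is finite and each entry has an estimate, choosing is an ordinary argmax.

```python
# Sketch of the reduction described above: the infinitely many possible life
# histories collapse into the finitely many interim decisions you can actually
# consider in an hour, each with an estimated expected utility.  The entries
# and numbers are hypothetical.
import math

estimated_eu = {
    "reject all of God's deals now":            0.0,
    "pick N = 1000 and stick to it":            1000.0,
    "accept today and hope something turns up": -math.inf,  # if you suspect you'll never stop
    "think about it for another hour":          900.0,      # rough value-of-information guess
}

best = max(estimated_eu, key=estimated_eu.get)
print("chosen interim decision:", best)
```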
And note that you don’t need determinism—adding stochastic components to yourself doesn’t change anything, as you’re already using expected utility anyway. So all you need is an assumption of naturalism—that you’re subject to the laws of nature, that your decision will be the result of deterministic or stochastic processes. In other words, you don’t have ‘spooky’ free will that contradicts the laws of physics.
Of course, you might be wrong about your estimates—maybe you have more/less willpower than you initially thought. That doesn’t invalidate the model—at every hour, at every interim decision, you need to choose the option that will, in your estimation, ultimately result in the most utility (not just for the next few moments or days).
If we want to be more formal, we can say that you’re deciding on a decision policy—choosing among the different agents that you could be, the one most likely to reach high expected utility. Here are some policies you could choose from (the challenge is to find a policy that gets you the most days in Hell/Heaven, without getting stuck and going on forever):
Decide to count the days, and reject God’s deal as soon as you lose count.
Fix a probability distribution over future days, and reject God’s deal with a certain probability (a sketch of this policy appears after the list).
Model yourself as a finite state machine. Figure out the Busy Beaver number of that finite state machine. Reject the deal when the number of days climbs close to that.
Realise that you probably can’t compute the Busy Beaver number for yourself, and instead use some very fast-growing function like the Ackermann function.
Use the Ackermann function to count down the days during which you formulate a policy; after that, implement it.
Estimate that there is a non-zero probability of falling into a loop (which would give you -∞ utility), so reject God’s deal as soon as possible.
Estimate that there is a non-zero probability of accidentally telling God the wrong thing, so commit to accepting all of God’s deals (and count on accidents to rescue you from -∞ utility).
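As promised above, here is a sketch of the second policy on the list: reject the deal with a fixed per-day probability p (equivalently, a geometric distribution over future days); the value of p is an arbitrary assumption. It escapes Hell with probability 1, while the expected number of banked Heaven days grows as p shrinks, so even this simple family of policies has no best member.

```python
# Sketch of the second policy in the list above: each day, reject God's deal
# with a fixed probability p (equivalently, draw the stopping day from a
# geometric distribution).  p = 0.001 is an arbitrary choice for illustration.
import random

def days_in_hell(p: float, rng: random.Random) -> int:
    """Number of deals accepted (days in Hell) before the policy finally rejects."""
    days = 0
    while rng.random() >= p:   # with probability 1 - p, accept one more day
        days += 1
    return days

if __name__ == "__main__":
    rng = random.Random(0)
    p = 0.001
    samples = [days_in_hell(p, rng) for _ in range(2_000)]
    mean_hell = sum(samples) / len(samples)
    # Expected days in Hell is (1 - p) / p; each one banks 2 days in Heaven.
    print(f"p = {p}: mean days in Hell ~ {mean_hell:.0f} "
          f"(theory {(1 - p) / p:.0f}), banked Heaven days ~ {2 * mean_hell:.0f}")
```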
But why spend a whole hour thinking about it? Surely the same applies for half an hour, a minute, a second, a microsecond? That’s entirely a convenience choice—if you think about things in one second increments, then the interim decision “think some more” is nearly always going to be the dominant one.
The mention of the Busy Beaver number hints at a truth—given the limitations of your mind and decision abilities, there is one policy, among all possible policies that you could implement, that gives you the most utility. More complicated policies you can’t implement (which generally means you’d hit a loop and get -∞ utility), and simpler policies would give you less utility. Of course, you likely won’t find that policy, or anything close to it. It all really depends on how good your policy finding policy is (and your policy finding policy finding policy...).
That’s maybe the most important aspect of these problems: some agents are just better than others. Unlike finite cases where any agent can simply list all the options, take their time, and choose the best one, here an agent with a better decision algorithm will outperform another. Even if they start with the same resources (memory capacity, cognitive shortcuts, etc...) one may be a lot better than another. If the agents don’t acquire more resources during their time in Hell, then their maximal possible utility is related to their Busy Beaver number—basically the maximal length that a finite-state agent can survive without falling into an infinite loop. Busy Beaver numbers are extremely uncomputable, so some agents, by pure chance, may be capable of acquiring much greater utility than others. And agents that start with more resources have a much larger theoretical maximum—not fair, but deal with it. Hence it’s not really an infinite option scenario, but an infinite agent scenario, with each agent having a different maximal expected utility that they can extract from the setup.
It should be noted that God, or any being capable of hypercomputation, has real problems in these situations: they actually have infinitely many options (not a finite set of options for choosing their future policy), and so don’t have any solution available.
This is also related to AIXI, the theoretical maximally optimal agent: for any computable agent that approximates AIXI, there will be other agents that approximate it better (and hence get higher expected utility). Again, it’s not fair, but not unexpected either: smarter agents are smarter.
What to do?
This analysis doesn’t solve the vexing question of what to do—what is the right answer to these kinds of problems? That depends on what type of agent you are, but what you need to do is estimate the largest integer you are capable of computing (and storing), and endure for that many days. Certain probabilistic strategies may improve your performance further, but you have to put the effort into finding them.
This is a very good post. The real question that has not explicitly been asked is the following:
How can utility be maximised when there is no maximum utility?
The answer of course is that it can’t.
Some of the ideas that are offered as solutions or approximations of solutions are quite clever, but because for any agent you can trivially construct another agent who will perform better, and there is no metric other than utility itself for determining how much better one agent is than another, solutions aren’t even interesting here. Trying to find limits such as storage capacity or computing power is only avoiding the real problem.
These are simply problems that have no solutions, like the problem of finding the largest integer has no solution. You can get arbitrarily close, but that’s it.
And since I’m at it, let me quote another limitation of utility I very recently wrote about in a comment to Pinpointing Utility:
This seems like it can be treated with non-standard reals or similar.
Yeah, it can. You still run into the problem that a one in a zillion chance of actual immortality is more valuable than any amount of finite lifespan, though, so as long as the probability of actual immortality isn’t zero, chasing after it will be the only thing that guides your decision.
Actually, it seems you can solve the immortality problem in ℝ after all, you just need to do it counterintuitively: 1 day is 1, 2 days is 1.5, 3 days is 1.75, etc, immortality is 2, and then you can add quality. Not very surprising in fact, considering immortality is effectively infinity and |ℕ| < |ℝ|.
But that would mean that the utility of a 50% chance of 1 day and a 50% chance of 3 days is 0.5*1 + 0.5*1.75 = 1.375, which is different from the utility of two days that you would expect.

You can’t calculate utilities anyway; there’s no reason to assume that u(n days) should be 0.5 * (u(n+m days) + u(n-m days)) for any n or m. If you want to include immortality, you can’t assign utilities linearly, although you can get arbitrarily close by picking a factor higher than 0.5, as long as it’s < 1.
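A short sketch of the assignment being discussed, under the assumption that day n is worth a^(n-1) utility for some factor 0 < a < 1: then u(n days) = (1 - a^n)/(1 - a), which is bounded by 1/(1 - a), the value assigned to immortality. a = 0.5 reproduces the 1, 1.5, 1.75, … sequence, and makes the non-linearity in the reply explicit.

```python
# Sketch of the bounded lifespan-utility assignment: day n is worth a**(n-1),
# so u(n days) = (1 - a**n) / (1 - a), bounded above by 1 / (1 - a), which is
# the value given to immortality.  a = 0.5 reproduces 1, 1.5, 1.75, ...; any
# factor 0 < a < 1 works.

def u(days: int, a: float = 0.5) -> float:
    return (1 - a**days) / (1 - a)

if __name__ == "__main__":
    for n in (1, 2, 3, 10):
        print(f"u({n} days) = {u(n):.4f}")
    print("u(immortality) =", 1 / (1 - 0.5))
    # The non-linearity noted in the reply: 0.5*u(1) + 0.5*u(3) = 1.375, not u(2) = 1.5
    print("0.5*u(1) + 0.5*u(3) =", 0.5 * u(1) + 0.5 * u(3))
```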
At least with surreal numbers you could have an infinitesimal chance of getting a (first-order) infinite lifespan, and have it able to win or lose against a finite chance of a finite life. In the transition to hyperreal analysis, I expect that the improved accuracy for vanishingly small chances (from arbitrarily small reals to actual infinitesimal values) would happen at the same time as the rewards go from arbitrarily large values to actually infinite amounts.
Half of any first-order infinitesimal chance could have some first-order infinite reward that would make it beat some finite chance of a finite reward. However, if we have a second-order infinitesimal chance of only a first-order infinite reward, then it loses to any finite expected utility. Not only do you have to attend to whether the chance is infinitesimal, but to how infinitesimal it is.
There is a difference between an infinite amount and “grows without bound”. If I mark the first-order infinite with w, there is no trouble saying that a result of w+2 wins over w. Thus if the function does have a peak, then it doesn’t matter how high it is, whether it is w times w or w to the power of w. In order to break things you would either have to have a scenario where God offers an unspecifiedly infinitesimal chance of an equally infinite heaven time, or have God offer the deal unspecifiedly many times. “A lot” isn’t a number between 0 and 1 and thus not a probability. Similarly, an “unbounded amount” isn’t a specified amount and thus not a number.
The absurdity of the situation is in it being ill-defined, or in it containing contradictions other than infinities. For if God promises me (some possibly infinite amount of) days in heaven and I never receive them, then God didn’t make good on his promise. So despite God’s abilities, I am in a position to make him break his promise, or I know beforehand that he can’t deliver the goods. If you measure “earned days in heaven”, then only the one that continually accepts wins. If you measure days spent in heaven, then only actually spending them counts, and having them earned doesn’t yet generate direct points. Whether or not an earned day indirectly means a day spent depends on the ability to cash in, and that is dependent on my choice. The situation doesn’t have probabilities specified in the absence of the strategy used. Therefore any agent that tries to calculate the “right odds” from the description of the problem either has to use the strategy they will formulate as a basis (which would totally negate any usefulness of coming up with the strategy), or their analysis assumes they use a different strategy than the one they actually end up using. So either they have to hear God proposing the deal wrong in order to execute on it right, or they get it right out of the luck of assuming the right thing from the start. So, contemplating this issue, you either come to know that your score is lower than it could be for another agent, realise that you don’t model yourself correctly, get the maximum score because you guessed right, or can’t know what your score is. Knowing that you solved the problem right is impossible.
“These are simply problems that have no solutions, like the problem of finding the largest integer has no solution. You can get arbitrarily close, but that’s it.”—Actually, you can’t get arbitrarily close. No matter how high you go, you are still infinitely far away.
“How can utility be maximised when there is no maximum utility? The answer of course is that it can’t.”
I strongly agree with this. I wrote a post today where I came to the same conclusion, but arguably took it a step further by claiming that the immediate logical consequence is that perfect rationality does not exist, only an infinite series of better rationalities.
This isn’t a paradox about unbounded utility functions but a paradox about how to do decision theory if you expect to have to make infinitely many decisions. Because limits and integrals can’t always be exchanged, the expected utility of a sequence of infinitely many decisions can’t in general be computed by summing up the expected utility of each decision separately.
Yes, that’s my point.
I believe it’s actually a problem about how to do utility-maximising when there’s no maximum utility, like the other problems. It’s easy to find examples for problems in which there are infinitely many decisions as well as a maximum utility, and none of those I came up with are in any way paradoxical or even difficult.
This is like the supremum-chasing Alex Mennen mentioned. It’s possible that normative rationality simply requires that your utility function satisfy the condition he mentioned, just as it requires the VNM axioms.
I’m honestly not sure. It’s a pretty disturbing situation in general.
I don’t think you need that—you can still profit from God’s offers, even without Alex Mennen’s condition.
You can profit, but that’s not the goal of normative rationality. We want to maximize utility.
I like this point of view.
ETA: A couple commenters are saying it is bad or discouraging that you can’t optimize over non-compact sets, or that this exposes a flaw in ordinary decision theory. My response is that life is like an infinitely tall drinking-glass, and you can put as much water as you like in it. You could look at the glass and say, “it will always be mostly empty”, or you could look at it and say “the glass can hold an awful lot of water”.
Yep. If I’m told “Tell Omega any real number r > 0, and he’ll give you 1-r utility”, I say “1/BusyBeaver(Graham’s number)”, cash in my utilon, and move on with my life.
This is rather tangential to the point, but I think that by refunding utility you are pretty close to smuggling in unbounded utility. I think it is better to assume away the cost.
An agent who only recognises finitely many utility levels doesn’t have this problem. However, there’s an equivalent problem for such an agent where you ask them to name a number n, and then you send them to Hell with probability 1/n and Heaven otherwise.
If it really has only finitely many utility levels, then for a sufficiently small epsilon and some even smaller delta, it will not care whether it ends up in Hell with probability epsilon or probability delta.
That’s if they only recognise finitely many expected utility levels. However, such an agent is not VNM-rational.
You could generate a random number using a distribution that has infinite expected value, then tell Omega that number. Your expected utility of following this procedure is infinite.
But if there is a non-zero chance of an Omega existing that can grant you an arbitrary amount of utility, then there must also be a non-zero chance of some Omega deciding on its own at some future time to grant you a random amount of utility using the above distribution, so you’ve already got infinite expected utility, no matter what you do.
It doesn’t seem to me the third problem (“You’re immortal. Tell Omega any real number r > 0, and he’ll give you 1-r utility.”) corresponds to any real world problems, so generalizing from the first two, the problem is just the well known problem of unbounded utility function leading to infinite or divergent expected utility. I don’t understand why a lot of people seem to think very highly of this post. (What’s the relevance of using ideas related to Busy Beaver to generate large numbers, if with a simple randomized strategy, or even by doing nothing, you can get infinite expected utility?)
Can a bounded agent actually do this? I’m not entirely sure.
Even so, given any distribution f, you can generate a better (dominant) distribution by taking f and adding 1 to the result. So now, as a bounded agent, you need to choose among possible distributions—it’s the same problem again. What’s the best distribution you can specify and implement, without falling into a loop or otherwise saying yes forever?
??? Your conclusion does not follow, and is irrelevant—we care about the impact of our actions, not about hypothetical gifts that may or may not happen, and are disconnected from anything we do.
First write 1 on a piece of paper. Then start flipping coins. For every head, write a 0 after the 1. If you run out of space on the paper, ask Omega for more. When you get a tail, stop and hand the pieces of paper to Omega. This has expected value of 1/2 * 1 + 1/4 * 10 + 1/8 * 100 + … which is infinite.
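A minimal simulation of this procedure (the sample sizes are arbitrary); since the true expectation diverges, the empirical mean keeps climbing as the number of samples grows.

```python
# Sketch of the coin-flip procedure just described: start from 1, append a 0
# for every head, stop at the first tail.  The result is 10**n with probability
# 1/2**(n+1), so the expectation 1/2*1 + 1/4*10 + 1/8*100 + ... diverges.
import random

def draw(rng: random.Random) -> int:
    value = 1
    while rng.random() < 0.5:   # heads: append a zero
        value *= 10
    return value                # tails: hand the number over

if __name__ == "__main__":
    rng = random.Random(0)
    for n_samples in (10**2, 10**4, 10**6):
        mean = sum(draw(rng) for _ in range(n_samples)) / n_samples
        print(f"{n_samples:>7} samples: empirical mean {mean:,.1f}")
```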
How does that relate to the claim in http://en.wikipedia.org/wiki/Turing_machine#Concurrency that “there is a bound on the size of integer that can be computed by an always-halting nondeterministic Turing machine starting on a blank tape”?
I think my procedure does not satisfy the definition of “always-halting” used in that theorem (since it doesn’t halt if you keep getting heads) even though it does halt with probability 1.
That’s probably the answer, as your solution seems solid to me.
That still doesn’t change my main point: if we posit that certain infinite expectations are better than others (St Petersburg + $1 being better than St Petersburg), you still benefit from choosing your distribution as best you can.
Can you give a mathematical definition of how to compare two infinite/divergent expectations and conclude which one is better? If you can’t, then it might be that such a notion is incoherent, and it wouldn’t make sense to posit it as an assumption. (My understanding is that people have previously assumed that it’s impossible to compare such expectations. See http://singularity.org/files/Convergence-EU.pdf for example.)
Not all infinite expectations can be compared (I believe) but there’s lots of reasonable ways that one can say that one is better than another. I’ve been working on this at the FHI, but let it slide as other things became more important.
One easy comparison device: if X and Y are random variables, you can often calculate the mean of X-Y using the Cauchy principal value (http://en.wikipedia.org/wiki/Cauchy_principal_value). If this is positive, then X is better than Y.
This gives a partial ordering on the space of distributions, so one can always climb higher within this partial ordering.
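A toy illustration of this kind of comparison, under assumptions of my own choosing (not necessarily the construction Stuart has in mind): couple X and Y on the same coin flips, with X paying 2^n and Y paying 2^n + n when the first tail lands on flip n. Both expectations diverge, but the mean of X - Y converges (here even absolutely, so no principal value is needed) to -2, which places Y above X in the partial order.

```python
# Toy comparison of two infinite-expectation gambles via the mean of their
# difference.  Coupled on the same coin flips: with probability 1/2**n the
# first tail is on flip n, X pays 2**n and Y pays 2**n + n.  E[X] and E[Y]
# both diverge, but E[X - Y] = -sum(n / 2**n) = -2 < 0, so Y is better.

def partial_mean_of_difference(terms: int) -> float:
    return sum((1 / 2**n) * (-n) for n in range(1, terms + 1))

if __name__ == "__main__":
    for terms in (5, 20, 60):
        print(f"first {terms:>2} terms of E[X - Y]: {partial_mean_of_difference(terms):.10f}")
    # The partial sums converge to -2.
```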
Assuming you want to eventually incorporate the idea of comparing infinite/divergent expectations into decision theory, how do you propose to choose between choices that can’t be compared with each other?
Random variables form a vector space, since X+Y and rX are both defined. Let V be this whole vector space, and let’s define a subspace W of comparable random variables. ie if X and Y are in W, then either X is better than Y, worse, or they’re equivalent. This can include many random variables with infinite or undefined means (got a bunch of ways of comparing them).
Then we simply need to select a complementary subspace W^perp in V, and claim that all random variables on it are equally worthwhile. This can be either arbitrary, or we can use other principles (there are ways of showing that even if we can’t say that Z is better than X, we can still find a Y that is worse than X but incomparable to Z).
What exactly are you doing in this step? Are you claiming that there is a unique maximal set of random variables which are all comparable, and it forms a subspace? Or are you taking an arbitrary set of mutually comparable random variables, and then picking a subspace containing it?
EDIT: the concept has become somewhat complicated to define, and needs a rethink before formalisation, so I’m reworking this post.
The key assumption I’ll use: if X and Y are both equivalent with 0 utility, then they are equivalent with each other and with rX for all real r.
Redefine W to be the space of all utility-valued random variables that are equivalent to zero utility, according to our various rules. If W is not a vector space, I extend it to be so by taking all linear combinations. Let C be the line of constant-valued random variables.
Then a total order requires:
A space W’, complementary to W and C, such that all elements of W’ are defined to be equivalent with zero utility. W’ is defined up to W, and again we can extend it by linear combinations. Let U = W+W’+C. Thus V/U corresponds to random variables with infinite utility (positive or negative). Because of what we’ve done, no two elements of V/U can have the same value (if so, their difference would be in W+W’), and no two elements can differ by a real number. So a total order on V/U unambiguously gives one on V. And the total order on V/U is a bit peculiar, and non-archimedean: if X>Y>0, then X>rY for all real r. Such an order can be given (non-uniquely) by an ordered basis (or a complete flag).
Again, the key assumption is that if two things are equivalent to zero, they are equivalent to each other—this tends to generate subspaces.
It’s mainly the subspace part of your statement that I’m concerned about. I see no reason why the space of totally ordered random variables should be closed under taking linear combinations.
Because that’s a requirement of the approach—once it no longer holds true, we no longer increase W.
Maybe this is a better way of phrasing it: W is the space of all utility-valued random variables that have the same value as some constant (by whatever means we establish that).
Then I get linear closure by fiat or assumption: if X=c and Y=d, then X+rY=c+rd, for c, d and r constants (and overloading the = sign to mean “<= and >=”).
But my previous post was slightly incorrect—it didn’t consider infinite expectations. I will rework that a bit.
I would assume the former, using Zorn’s lemma. That doesn’t yield uniqueness, though.
The point might be that if all infinite expected utility outcomes are considered equally valuable, it doesn’t matter which strategy you follow, so long as you reach infinite expected utility, and if that includes the strategy of doing nothing in particular, all games become irrelevant.
If you don’t like comparing infinite expected outcomes (ie if you don’t think that (utility) St Petersburg + $1 is better than simply St Petersburg), then just focus on the third problem, which Wei has oddly rejected.
I’ve often stated my worry that Omega can be used to express problems that have no real-world counterpart, thus distracting our attention away from problems that actually need to be solved. As I stated at the top of this thread, it seems to me that your third problem is such a problem.
Got a different situation where you need to choose sensibly between options with infinite expectation: http://lesswrong.com/r/discussion/lw/gng/higher_than_the_most_high/
Is this a more natural setup?
Actually, the third problem is probably the most relevant of them all—it’s akin to a bounded paperclipper uncertain as to whether they’ve succeeded. Kind of like: “You get utility 1 for creating 1 paperclip and then turning yourself off (and 0 in all other situations).”
I still don’t see how it’s relevant, since I don’t see a reason why we would want to create an AI with a utility function like that. The problem goes away if we remove the “and then turning yourself off” part, right? Why would we give the AI a utility function that assigns 0 utility to an outcome where we get everything we want but it never turns itself off?
The designer of that AI might have (naively?) thought this was a clever way of solving the friendliness problem. Do the thing I want, and then make sure to never do anything again. Surely that won’t lead to the whole universe being tiled with paperclips, etc.
This can arise indirectly, or through design, or for a host of reasons. That was the first thought that popped into my mind; I’m sure other relevant examples can be had. We might not assign such a utility—then again, we (or someone) might, which makes it relevant.
Does this not mean that such a task is impossible? http://en.wikipedia.org/wiki/Non-deterministic_Turing_machine#Equivalence_with_DTMs
I remember the days when I used to consider Ackermann to be a fast-growing function.
What’s your favourite computable fast-growing function these days?
I believe I understand ordinals up to the large Veblen ordinal, so the fast-growing hierarchy for that, plus 2, of 9, or thereabouts, would be the largest computable integer I could program without consulting a reference or having to think too hard. There are much larger computable numbers I can program if I’m allowed to use the Internet to look up certain things.
I don’t expect extreme examples to lead to good guidance for non-extreme ones.
Two functions may both approach infinity, and yet have a finite ratio between them.
Hard cases make bad law.
This suggests a new explanation for the Problem of Evil: God could have created a world that had no evil and no suffering which would have been strictly better than our world, but then He could also have created a world that was strictly better than that one and so on, so He just arbitrarily picked a stopping point somewhere and we ended up with the world as we know it.
This was brought up in the recent William Craig—Rosenberg debate (don’t waste your time), the Sorites “paradox” answer to the Problem of Evil. Rosenberg called it the type of argument that gives philosophy a bad name, and acted too embarrassed by its stupidity to even state it. (Edit: changed the link)
Man, Rosenberg looked lost in that debate...
(This one versus Peter Atkins is much better; just watch the Atkins parts, Craig recites the same spiel as always.
Atkins doesn’t sugar-coat his arguments, but then again, that’s to be expected … …)
I stopped watching Craig’s debates after Kagan smoked him so thoroughly that even the steelmanned versions of his arguments sounded embarrassing. The Bradley and Parsons debates are also definitely worth listening to, if only because it’s enjoyable (and I must admit, it was quite comforting at the time) to hear Craig get demolished.
Depends on how well I can store information in hell. I imagine that hell is a little distracting.
Alternately, how reliably I can generate random numbers when being offered the deal (I’m talking to God here, not Satan, so I can trust the numbers). Then I don’t need to store much information. Whenever I lose count, I ask for a large number of dice of N sides where N is the largest number I can specify in time (there we go with bounding the options again—I’m not saying you were wrong). If they all come up 1, I take the deal. Otherwise I reset my count.
The only objections to this I can think of are based on hell not providing a constant level of marginal disutility, but that’s an implicit requirement of the problem. Once I imagine hell getting more tolerable over time, so the disutility only increases linearly, it seems a lot better.
Infinite utilities violate VNM-rationality. Unbounded utility functions do too, because they allow you to construct gambles that have infinite utility. For instance, if the utility function is unbounded, then there exists a sequence of outcomes such that for each n, the utility of the nth outcome is at least 2^n. Then the gamble that, for each positive integer n, gives you a 1/2^n chance of getting the nth outcome has infinite expected utility.
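(To spell out the arithmetic: the expected utility of that gamble is at least the sum over n of (1/2^n) * 2^n = 1 + 1 + 1 + …, which diverges.)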
In the case of utility functions that are bounded but do not have a maximum, the problem is not particularly worrying. If you pick a tiny amount of utility epsilon, you can ensure that you will never sacrifice more than epsilon utility. An agent that does this, while not optimal, will be pretty good provided that it actually does always choose tiny values of epsilon.
This may be one of those times where it is worth pointing out once again that if you are a utility-maximizer because you follow Savage’s axioms then you are not only a utility-maximizer[0], but a utility-maximizer with a bounded utility function.
[0]Well, except that your notion of probability need only be finitely additive.
Excellent post.
Cheers!
Figure out that I’m not a perfectly rational agent and go on with the deal for as long as I feel like it.
Bail out when I subjectively can’t stand any more of Hell or when I’m fed up with writing lots of numbers on an impossibly long roll of paper.
Of course, these aren’t answers that help in developing a decision theory for an AI …
First, the original question seems incomplete. Presumably the alternative to accepting the deal is something better than guaranteed hell forever, say, 50/50 odds of ending up in either hell or heaven.
Second, the initial evaluation of utilities is based on a one-shot setup, so you effectively precommit to not accepting any new deals which screw up the original calculation, like spending an extra day in hell.
The problem starts after you took the first deal. If you cut that part of the story, then the other choice is purgatory forever.
I must be missing something. Your original calculation assumes no further (identical) deals, otherwise you would not accept the first one.
The deal is one day at a time: 1 day hell now + 2 days heaven later, then purgatory; or take your banked days in heaven and then purgatory.
At the beginning you have 0 days in heaven in the bank.
I see. Then clearly your initial evaluation of the proposed “optimal” solution (keep banking forever) is wrong, as it picks the lowest utility. As in the other examples, there is no best solution due to unboundedness, but any other choice is better than infinite banking.
I was attempting to complete the problem statement that you thought was incomplete—not to say that it was a good idea to take that path.
I thought it was incomplete? Are you saying that it can be considered complete without specifying the alternatives?
I think that sorting this muddled conversation out would not be worth the effort required.
Pure chance is one path, divine favor is another. Though I suppose, to the extent divine favor depends on one’s policy, bits of omega begotten of divine favor would show up as a computably-anticipatable consequence, even if omega isn’t itself computable. Still, a heuristic you didn’t mention: ask God what policy He would adopt in your place.
I’ve heard hell is pretty bad. I feel like after some amount of time in hell I would break down like people who are being tortured often do and tell God “I don’t even care, take me straight to purgatory if you have to, anything is better than this!” TBH, I feel like that might even happen at the end of the first day. (But I’d regret it forever if I never even got to check heaven out at least once.) So it seems extremely unlikely that I would ever end up “accidentally” spending an eternity in hell. d:
In all seriousness, I enjoyed the post.
Alas, the stereotypical images of Heaven and Hell aren’t perfectly set up for our thought experiments! I shall complain to the pope.
You’re taking this too literally. The point is that you’re immortal, u(day in heaven) > u(day in neither heaven nor hell) > u(day in hell), and u(2 days in heaven and 1 day in hell) > u(3 days in neither heaven nor hell).
You don’t even need hell for this sort of problem; suppose God offers you the choice to either cash in on your days in heaven (0 at the beginning) right now, or wait a day, after which he will add 1 day to your bank and offer you the same deal again. How long will you wait? What if God halved the additional time with each deal, so you couldn’t even spend 2 days in heaven, but could get arbitrarily close to it?
This problem is obviously isomorphic to the previous one under the transformation r=1/s and rescaling the utility: pick a number s > 0 and rescale the utility by s/(1-r); both are valid operations on utilities.