Many worlds have nothing to do with the validity of suicidal decisions. If you have an answer that maximizes expected utility but gives an almost-certain probability of total failure, you still take it in a deterministic world. There is no magic by which a deterministic world declares the decision-theoretic calculation invalid in this particular case while many-worlds lets it stand.
I think you’re right. Would you agree that this is a problem with following the policy of maximizing expected utility? Or would you keep drawing cards?
This is a variant on the St. Petersburg paradox, innit? My preferred resolution is to assert that any realizable utility function is bounded.
Thanks for the link—this is another form of the same paradox orthonormal linked to, yes. The Wikipedia page proposes numerous “solutions”, but most of them just dodge the question by taking advantage of the fact that the paradox was posed using “ducats” instead of “utility”. It seems like the notion of “utility” was invented in response to this paradox. If you pose it again using the word “utility”, these “solutions” fail. The only possibly workable solution offered on that Wikipedia page is:
Rejection of mathematical expectation
Various authors, including Jean le Rond d’Alembert and John Maynard Keynes, have rejected maximization of expectation (even of utility) as a proper rule of conduct. Keynes, in particular, insisted that the relative risk of an alternative could be sufficiently high to reject it even were its expectation enormous.
The page notes the reformulation in terms of utility, which it terms “super St. Petersburg paradox”. (It doesn’t have its own section, or I’d have linked directly to that.) I agree that there doesn’t seem to be a workable solution—my last refuge was just destroyed by Vladimir Nesov.
I’m afraid I don’t understand the difficulty here. Let’s assume that Omega can access any point in configuration space and make that the reality. Then either (A) at some point it runs out of things with which to entice you to draw another card, in which case your utility function is bounded, or (B) it never runs out of such things, in which case your utility function is unbounded.
Why is this so paradoxical again?
If it’s not paradoxical, how many cards would you draw?
I guess no more than 10 cards. That’s based on not being able to imagine a scenario such that I’d prefer a .999 probability of death plus a .001 probability of that scenario to the status quo. But it’s just a guess, because Omega might have a better imagination than I do, or understand my utility function better than I do.
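(For concreteness, a minimal sketch of the arithmetic behind “no more than 10 cards”, assuming each draw is an independent 50/50 star/skull as in the scenario; the numbers are only illustrative.)

```python
# Probability of surviving n consecutive 50/50 card draws.
for n in (1, 5, 10, 20):
    p_survive = 0.5 ** n
    print(f"{n:2d} cards: P(survive) = {p_survive:.6f}, P(death) = {1 - p_survive:.6f}")

# After 10 cards P(survive) is about 0.001, which is the ".999 probability of
# death + .001 probability of the scenario" trade-off described above.
```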
Omega offers you the healing of all the rest of Reality; every other sentient being will be preserved at what would otherwise be death and allowed to live and grow forever, and all unbearable suffering not already in your causal past will be prevented. You alone will die.
You wouldn’t take a trustworthy 0.001 probability of that reward and a 0.999 probability of death, over the status quo? I would go for it so fast that there’d be speed lines on my quarks.
Really, this whole debate is just about people being told “X utilons” and interpreting utility as having diminishing marginal utility—I don’t see any reason to suppose there’s more to it than that.
There’s no reason for Omega to kill me in the winning outcome...
Well, I’m not as altruistic as you are. But there must be some positive X such that even you wouldn’t take a trustworthy X probability of that reward and a 1-X probability of death, over the status quo, right? Suppose you’ve drawn enough cards to win this prize, what new prize can Omega offer you to entice you to draw another card?
Omega’s a bastard. So what?
WHAT? Are you honestly sure you’re THAT not as altruistic as I am?
There’s the problem of whether the scenario I described, which involves a “forever” and “over all space”, actually has infinite utility compared to increments in my own life, which, even if I would otherwise live forever, would cover an infinitesimal fraction of all space; but if we fix that with a rather smaller prize that I would still accept, then yes, of course.
Heal this Reality plus another three?
That’s fine, I just didn’t know if that detail had some implication that I was missing.
Yes, I’m pretty sure, although I leave open the possibility that I may encounter an argument in the future that would persuade me to change my mind. My understanding is that most people have preferences like mine, so I’m surprised that you’re so surprised.
It seems that I had missed the earlier posts on bounded vs. unbounded utility functions. I’ll follow up there to avoid retreading old ground.
I’m shocked, and I hadn’t thought that most people had preferences like yours—at least would not verbally express such preferences; their “real” preferences being a whole separate moral issue beyond that. I would have thought that it would be mainly psychopaths, the Rand-damaged, and a few unfortunate moral philosophers with mistaken metaethics, who would decline that offer.
I guess I would follow up with these questions: (1) When you see someone else hurting, or attend a friend’s funeral, do you feel sad; (2) are you more viscerally afraid of your own death than the strength of that emotion, if comparing two single cases; (3) do you decline to multiply out of a deliberate belief that all events after your own death ought to have zero utility to you, even if they feel sad when you think about them now; or (4) do you just generally want to leave the intuitive judgment (2) with its innate lack of multiplication undisturbed?
Or if I’m asking the wrong questions here, then what is going on? I would expect most humans to instinctively feel that their whole tribe, to say nothing of the entire rest of reality, was worth something; and I would expect a rationalist to understand that if their own life does not literally have lexicographic priority (i.e., lives of others have infinitesimal=0 value in the utility function) then the multiplication factor here is overwhelming; and I would also expect you, Wei Dai, to not mistakenly believe that you were rationally forced to be lexicographically selfish regardless of your feelings… so I’m really not clear on what could be going on here.
I guess my most important question would be: Do you feel that way, or are you deciding that way? In the former case, I might just need to make a movie showing one individual after another being healed, and after you’d seen enough of them, you would agree—the visceral emotional force having become great enough. In the latter case I’m not sure what’s going on.
PS again: Would you accept a 60% probability of death in exchange for healing the rest of reality?
1: Yes. 2: Yes. 3: No. 4: I see a number of reasons not to do straight multiplication:
Straight multiplication leads to an absurd degree of unconcern for oneself, given that the number of potential people is astronomical. It means, for example, that you can’t watch a movie for enjoyment, unless that somehow increases your productivity for saving the world. (In the least convenient world, watching movies uses up time without increasing productivity.)
No one has proposed a form of utilitarianism that is free from paradoxes (e.g., the Repugnant Conclusion).
My current position resembles the “Proximity argument” from Revisiting torture vs. dust specks:
Proximity argument: don’t ask me to value strangers equally to friends and relatives. If each additional person matters 1% less than the previous one, then even an infinite number of people getting dust specks in their eyes adds up to a finite and not especially large amount of suffering.
This agrees with my intuitive judgment and also seems to have relatively few philosophical problems, compared to valuing everyone equally without any kind of discounting.
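(A quick arithmetic check of the 1% discounting in the proximity argument above; a minimal sketch with the discount factor taken directly from the quote.)

```python
# If the k-th additional person is weighted by 0.99**k, the total weight of
# infinitely many dust-speck victims is a geometric series with limit
# 1 / (1 - 0.99) = 100.
truncated = sum(0.99 ** k for k in range(100_000))
print(truncated)          # ~100.0 (finite truncation of the series)
print(1 / (1 - 0.99))     # exact limit: 100.0

# So an infinite number of specks counts for only about 100 specks' worth of
# suffering under this discounting, as the comment claims.
```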
My last bullet above already answered this, but I’ll repeat for clarification: it’s both.
This should be clear from my answers above as well, but yes.
Oh, ’ello. Glad to see somebody still remembers the proximity argument. But it’s adapted to our world where you generally cannot kill a million distant people to make one close relative happy. If we move to a world where Omegas regularly ask people difficult questions, a lot of people adopting proximity reasoning will cause a huge tragedy of the commons.
About Eliezer’s question, I’d exchange my life for a reliable 0.001 chance of healing reality, because I can’t imagine living meaningfully after being offered such a wager and refusing it. Can’t imagine how I’d look other LW users in the eye, that’s for sure.
I publicly rejected the offer, and don’t feel like a pariah here. I wonder what is the actual degree of altruism among LW users. Should we set up a poll and gather some evidence?
Cooperation is a different consideration from preference. You can prefer only to keep your own “body” in certain dynamics, no matter what happens to the rest of the world, and still benefit the most from, roughly speaking, helping other agents. Which can include occasional self-sacrifice a la counterfactual mugging.
I’d be interested to know what you think of Critical-Level Utilitarianism and Population-Relative Betterness as ways of avoiding the repugnant conclusion and other problems.
So does your answer change once you’ve drawn 10 cards and are still alive?
No, if my guess is correct, then some time before I’m offered the 11th card, Omega will say “I can’t double your utility again” or equivalently, “There is no prize I can offer you such that you’d prefer a .5 probability of it to keeping what you have.”
After further thought, I see that case (B) can be quite paradoxical. Consider Eliezer’s utility function, which is supposedly unbounded as a function of how many years he lives. In other words, Omega can increase Eliezer’s utility without bound just by giving him increasingly longer lives. Expected utility maximization then dictates that he keeps drawing cards one after another, even though he knows that by doing so, with probability 1 he won’t live to enjoy his rewards.
When you go to infinity, you’d need to define additional mathematical structure that answers your question. You can’t just conclude that the correct course of action is to keep drawing cards for eternity, doing nothing else. Even if at each moment the right action is to draw one more card, when you consider the overall strategy, the strategy of drawing cards for all time may be a wrong strategy.
For example, consider the following preference on infinite strings. A string has utility 0, unless it has the form 11111.....11112222...., that is, a finite number of 1s followed by an infinite number of 2s, in which case its utility is the number of 1s. Clearly, a string of this form with one more 1 has higher utility than one without, and so a string with one more 1 should be preferred. But a string consisting only of 1s doesn’t have the non-zero-utility form, because it doesn’t have the tail of an infinite number of 2s. It’s a fallacy to follow an incremental argument to infinity. Instead, one must follow a one-step argument that considers the infinite objects as a whole.
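(A minimal executable sketch of this preference; the encoding of an infinite string by the number of leading 1s plus a flag for the infinite tail of 2s is mine, chosen only to make the example runnable.)

```python
# (n_ones, has_tail_of_twos) encodes an infinite string:
#   (n, True)  -> 1 repeated n times, then 2 forever   : utility n
#   (n, False) -> no infinite tail of 2s (e.g. all 1s) : utility 0
def utility(n_ones, has_tail_of_twos):
    return n_ones if has_tail_of_twos else 0

# Each extra 1 (keeping the tail of 2s) is an improvement...
print(utility(5, True) < utility(6, True))   # True
# ...but the "limit" of that incremental argument, the all-1s string, has no
# tail of 2s and so utility 0:
print(utility(10**9, False))                 # 0
```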
See also Arntzenius, Elga, and Hawthorne: “Bayesianism, Infinite Decisions, and Binding”.
What you say sounds reasonable, but I’m not sure how I can apply it in this example. Can you elaborate?
Consider Eliezer’s choice of strategies at the beginning of the game. He can either stop after drawing n cards for some integer n, or draw an infinite number of cards. First, supposing it takes 10 seconds to draw a card,
EU(draw an infinite number of cards) = 1/2 U(live 10 seconds) + 1/4 U(live 20 seconds) + 1/8 U(live 30 seconds) + …
which obviously converges to a small number. On the other hand, EU(stop after n+1 cards) > EU(stop after n cards) for all n. So what should he do?
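(A minimal numeric sketch of the convergence claim, with illustrative utility functions that the comment does not specify; any U growing slower than 2^t gives a finite sum.)

```python
# EU(draw forever) = sum over k of (1/2)**k * U(live 10*k seconds).
def eu_draw_forever(U, terms=1000):
    return sum(0.5 ** k * U(10 * k) for k in range(1, terms))

print(eu_draw_forever(lambda t: t))       # U(t) = t   -> 20.0
print(eu_draw_forever(lambda t: t ** 2))  # U(t) = t^2 -> 600.0
# Both series converge to small numbers, while EU(stop after n+1 cards) keeps
# exceeding EU(stop after n cards) if the utility function is unbounded.
```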
This exposes a hole in the problem statement: what does Omega’s prize measure? We determined that U0 is the utility of the counterfactual where Omega kills you and U1 is the utility of the counterfactual where it does nothing, but what is U2=U1+3*(U1-U0)? This seems to be the expected utility of the event where you draw the lucky card, in which case this event contains, in particular, your future decisions to continue drawing cards. But if so, it places a limit on how much your utility can be improved in later rounds: if your utility continues to increase, it contradicts the statement in the first round that your utility is going to be only U2, and no more. Utility can’t change, as each utility is a valuation of a specific event in the sample space.
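(For reference, a minimal sketch of why each single draw nonetheless looks favorable under this reformulation, assuming the 50/50 star/skull draw; the sample numbers are arbitrary.)

```python
# One round: with probability 1/2 you die (U0); with probability 1/2 you get
# the prize U2 = U1 + 3*(U1 - U0).  Then
#   EU(draw) = 0.5*U0 + 0.5*U2 = 2*U1 - U0,
# which exceeds EU(decline) = U1 by exactly U1 - U0 whenever U1 > U0.
def eu_draw(U0, U1):
    U2 = U1 + 3 * (U1 - U0)
    return 0.5 * U0 + 0.5 * U2

for U0, U1 in [(0.0, 1.0), (-5.0, 2.0), (10.0, 11.0)]:
    print(eu_draw(U0, U1) - U1, U1 - U0)   # the two columns always match
```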
So, the alternative formulation that removes this contradiction is for Omega to only assert that the expected utility given that you receive a lucky card is no less than U2. In this case the right strategy seems to be to continue drawing cards indefinitely, since the utility you receive could be in something other than your own life, now spent only drawing cards.
This, however, seems to sidestep the issue. What if the only utility you see is in your future actions, which don’t include picking cards, and you can’t interleave card-drawing with other actions; that is, you must allot a given amount of time to picking cards?
You can recast the problem of choosing each of the infinite number of decisions (or of choosing one among all the available, in some sense infinite, sequences of decisions) as the problem of choosing a finite “seed” strategy for making decisions. Say only a finite number of strategies is available, for example only what fits in the memory of the computer that starts the enterprise; that memory could be expanded after the start of the experiment, but the first version has a specified limit. In this case, the right program is as close to a Busy Beaver as you can get: you draw cards as long as possible, but only finitely long, and after that you stop and go on to enjoy the actual life.
Why are you treating time as infinite? Surely it’s finite, just taking unbounded values?
But you’re not asked to decide a strategy for all of time. You can change your decision at every round freely.
You can’t change any fixed thing, you can only determine it. Change is a timeful concept. Change appears when you compare now and tomorrow, not when you compare the same thing with itself. You can’t change the past, and you can’t change the future. What you can change about the future is your plan for the future, or your knowledge: as time goes on, your idea about a fact in the now becomes a different idea tomorrow.
When you “change” your strategy, what you are really doing is changing your mind about what you’re planning. The question you are trying to answer is what to actually do, what decisions to implement at each point. A strategy for all time is a generator of decisions at each given moment, an algorithm that runs and outputs a stream of decisions. If you know something about each particular decision, you can make a general statement about the whole stream. If you know that each next decision is going to be “accept” as opposed to “decline”, you can prove that the resulting stream is equivalent to an infinite stream that only answers “accept”, at all steps. And in the end you have a process: the consequences of your decision-making algorithm consist of all of the decisions. You can’t change that consequence, as the consequence is what actually happens; if you changed your mind about making a particular decision along the way, the effect of that change is already factored into the resulting stream of actions.
The consequentialist preference is going to compare the effects of whole infinite streams of potential decisions, and unless you know that the future is finite, the state space is going to contain elements corresponding to infinite decision traces. In this state space, there is an infinite stream corresponding to deciding to continue picking cards for eternity.
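(A minimal sketch of the “strategy as a whole stream” view, using Python generators as a stand-in for the infinite streams treated in the references further down; the encoding is mine.)

```python
from itertools import islice

# A strategy is a generator emitting one decision per round; the stream it
# emits is the whole object a consequentialist preference evaluates.
def always_accept():
    while True:
        yield "accept"

def stop_after(n):
    for _ in range(n):
        yield "accept"
    while True:
        yield "decline"

# Round by round the two strategies look identical for the first n rounds...
print(list(islice(always_accept(), 3)))
print(list(islice(stop_after(3), 3)))
# ...but as whole streams they are different objects, and a preference over
# whole streams can rank the eventually-stopping one strictly higher even
# though each individual "accept" looked like an improvement at the time.
print(list(islice(stop_after(3), 5)))
```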
Thanks, I understand now.
Whoa.
Is there something I can take that would help me understand that better?
I’m more or less talking just about infinite streams, which is a well-known structure in math. You can try looking at the following references. Or find something else.
P. Cousot & R. Cousot (1992). “Inductive definitions, semantics and abstract interpretations”. In POPL ’92: Proceedings of the 19th ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, pp. 83–94, New York, NY, USA. ACM. http://www.di.ens.fr/~cousot/COUSOTpapers/POPL92.shtml
J. J. M. M. Rutten (2003). “Behavioural differential equations: a coinductive calculus of streams, automata, and power series”. Theoretical Computer Science 308(1–3):1–53. http://www.cwi.nl/~janr/papers/files-of-papers/tcs308.pdf
Does Omega’s utility doubling cover the contents of the as-yet-untouched deck? It seems to me that it’d be pretty spiffy re: my utility function for the deck to have a reduced chance of killing me.
At first I thought this was pretty funny, but even if you were joking, it may actually map to the extinction problem, since each new technology has a chance of making extinction less likely, as well. As an example, nuclear technology had some probability of killing everyone, but also some probability of making Orion ships possible, allowing diaspora.
While I’m gaming the system, my lifetime utility function (if I have one) could probably be doubled by giving me a reasonable suite of superpowers, some of which would let me identify the rest of the cards in the deck (X-ray vision, precog powers, etc.) or be protected from whatever mechanism the skull cards use to kill me (immunity to electricity or just straight-up invulnerability). Is it a stipulation of the scenario that nothing Omega does to tweak the utility function upon drawing a star affects the risks of drawing from the deck, directly or indirectly?
It should be, especially since the existential-risk problems that we’re trying to model aren’t known to come with superpowers or other such escape hatches.
Yeesh. I’m changing my mind again tonight. My only excuse is that I’m sick, so I’m not thinking as straight as I might.
I was originally thinking that Vladimir Nesov’s reformulation showed that I would always accept Omega’s wager. But now I see that at some point U1+3*(U1-U0) must exceed any upper bound (assuming I survive that long).
Given U1 (utility of refusing initial wager), U0 (utility of death), U_max, and U_n (utility of refusing wager n assuming you survive that long), it might be possible that there is a sequence of wagers that (i) offer positive expected utility at each step; (ii) asymptotically approach the upper bound if you survive; and (iii) have a probability of survival approaching zero. I confess I’m in no state to cope with the math necessary to give such a sequence or disprove its existence.
There is no such sequence. Proof:
In order for wager n to have nonnegative expected utility, P(death)*U_0 + (1-P(death))*U_(n+1) >= U_n. Equivalently, P(death this time | survived until n) <= (U_(n+1) - U_n) / (U_(n+1) - U_0).
Assume the worst case, equality. Then the cumulative probability of survival decreases by exactly the same factor as your utility (conditioned on survival) increases. This is simple multiplication, so it’s true of a sequence of borderline wagers too.
With a bounded utility function, the worst sequence of wagers you’ll accept gives, in total, P(death) <= (U_max - U_1)/(U_max - U_0), i.e., cumulative P(survival) >= (U_1 - U_0)/(U_max - U_0). Which is exactly what you’d expect.
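(A minimal numeric check of the telescoping argument, with illustrative values U_0 = 0, U_1 = 1, U_max = 100 and a schedule of borderline wagers that halves the remaining distance to the bound each round; the schedule is mine.)

```python
U0, U1, U_max = 0.0, 1.0, 100.0

U_n = U1          # utility conditional on having survived so far
survival = 1.0    # cumulative probability of surviving all wagers so far
for _ in range(60):
    U_next = U_n + 0.5 * (U_max - U_n)        # halve the gap to the bound
    p_death = (U_next - U_n) / (U_next - U0)  # borderline (break-even) wager
    survival *= 1.0 - p_death                 # telescoping product
    U_n = U_next

print(U_n)             # ~100: conditional utility has climbed to the bound
print(survival)        # ~0.01 = (U1 - U0) / (U_max - U0)
print(1 - survival)    # ~0.99 = (U_max - U1) / (U_max - U0), worst-case total P(death)
```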
When there’s an infinite number of wagers, there can be a distinction between accepting the whole sequence at one go and accepting each wager one after another. (There’s a paradox associated with this distinction, but I forget what it’s called.) Your second-last sentence seems to be a conclusion about accepting the whole sequence at one go, but I’m worried about accepting each wager one after another. Is the distinction important here?
Are you thinking of the Riemann series theorem? That doesn’t apply when the payoff matrix for each bet is the same (and finite).
No, it was this thing. I just couldn’t articulate it.
A bounded utility function probably gets you out of all problems along those lines.
Certainly it’s good in the particular case: your expected utility (in the appropriate sense) is an increasing function of bets you accept and increasing sequences don’t have convergence issues.
How would you bound your utility function? Just pick some arbitrary bounded, increasing function f, and set utility’ = f(utility)? That seems arbitrary. I suspect it might also make theorems about expected utility maximization break down.
No, I’m not advocating changing utility functions. I’m just saying that if your utility function is bounded, you don’t have either of these problems with infinity. You don’t have the convergence problem nor the original problem of probability of the good outcome going to zero. Of course, you still have the result that you keep making bets till your utility is maxed out with very low probability, which bothers some people.
How would it help if this sequence existed?
If the sequence exists, then the paradox* persists even in the face of bounded utility functions. (Or possibly it already persists, as Vladimir Nesov argued and you agreed, but my cold-virus-addled wits aren’t sharp enough to see it.)
* The paradox is that each wager has positive expected utility, but accepting all wagers leads to death almost surely.
Ah. So you don’t want the sequence to exist.
In the sense that if it exists, then it’s a bullet I will bite.
Why is rejection of mathematical expectation an unworkable solution?
This isn’t the only scenario where straight expectation is problematic. Pascal’s Mugging, timeless decision theory, and maximization of expected growth rate come to mind. That makes four.
In my opinion, LWers should not give expected utility maximization the same axiomatic status that they award consequentialism. Is this worth a top level post?
This is exactly my take on it also.
There is a model which is standard in economics which says “people maximize expected utility; risk averseness arises because utility functions are concave”. This has always struck me as extremely fishy, for two reasons: (a) it gives rise to paradoxes like this, and (b) it doesn’t at all match what making a choice feels like for me: if someone offers me a risky bet, I feel inclined to reject it because it is risky, not because I have done some extensive integration of my utility function over all possible outcomes. So it seems a much safer assumption to just assume that people’s preferences are a function of the probability distribution over outcomes, rather than making the more restrictive assumption that that function has to arise as an integral over utilities of individual outcomes.
So why is the “expected utility” model so popular? A couple of months ago I came across a blog-post which provides one clue: it pointed out that standard zero-sum game theory works when players maximize expected utility, but does not work if they have preferences about probability distributions of outcomes (since then introducing mixed strategies won’t work).
So an economist who wants to apply game theory will be inclined to assume that actors are maximizing expected utility; but we LWers shouldn’t necessarily.
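(A minimal sketch of the textbook claim under discussion: that a concave utility function by itself produces risk-averse choices for an expected-utility maximizer. Here sqrt is just one illustrative concave function.)

```python
import math

u = math.sqrt                     # an illustrative concave utility of wealth
wealth = 100.0

# A fair 50/50 bet: win or lose 50 with equal probability.
eu_take_bet = 0.5 * u(wealth + 50) + 0.5 * u(wealth - 50)
eu_decline = u(wealth)

print(eu_take_bet, eu_decline)    # ~9.66 < 10.0, so the bet is declined
# The standard model thus reproduces risk aversion without any primitive
# "dislike of risk", which is exactly the modelling move criticized above.
```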
There is a model which is standard in economics which say “people maximize expected utility; risk averseness arises because utility functions are convex”.
Do you mean concave?
Technically speaking, isn’t maximizing expected utility a special case of having preferences about probability distributions over outcomes? So maybe you should instead say “does not work elegantly if they have arbitrary preferences about probability distributions.”
This is what I tend to do when I’m having conversations in real life; let’s see how it works online :-)
Yes, thanks. I’ve fixed it.
What does it mean, technically, to have a preference “about” probability distributions?
I think John Maxwell IV and I mean the same thing, but here is the way I would phrase it. Suppose someone is offering to let me pick a ticket for one of a range of different lotteries. Each lottery offers the same set of prizes, but depending on which lottery I participate in, the probability of winning them is different.
I am an agent, and we assume I have a preference order on the lotteries—e.g. which ticket I want the most, which ticket I want the least, and which tickets I am indifferent between. The action that will be rational for me to take depends on which ticket I want.
I am saying that a general theory of rational action should deal with arbitrary preference orders for the tickets. The more standard theory restricts attention to preference orders that arise from first assigning a utility value to each prize and then computing the expected utility for each ticket.
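(A minimal code sketch of that restriction, with made-up prizes and made-up utility numbers: the standard theory first fixes a utility for each prize and then ranks tickets by expected utility. A more general theory would admit orders over the same tickets that no such assignment generates.)

```python
# Hypothetical prizes and utilities, purely for illustration.
prize_utility = {"nothing": 0.0, "toaster": 5.0, "car": 100.0}

# Each ticket: probability of winning each prize.
tickets = {
    "ticket_1": {"nothing": 0.90, "toaster": 0.00, "car": 0.10},
    "ticket_2": {"nothing": 0.50, "toaster": 0.50, "car": 0.00},
    "ticket_3": {"nothing": 0.00, "toaster": 1.00, "car": 0.00},
}

def expected_utility(ticket):
    return sum(p * prize_utility[prize] for prize, p in ticket.items())

# The preference order the standard theory induces over the tickets.
for name in sorted(tickets, key=lambda t: expected_utility(tickets[t]), reverse=True):
    print(name, expected_utility(tickets[name]))
```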
Let’s define an “experiment” as something that randomly changes an agent’s utility based on some probability density function. An agent’s “desire” for a given experiment is the amount of utility Y such that the agent is indifferent between the experiment occurring and having their utility changed by Y.
From Pfft we see that economists assume that for any given agent and any given experiment, the agent’s desire for the experiment is equal to ∫ x·f(x) dx, where x is an amount of utility and f(x) gives the probability (density) that the experiment’s outcome will change the agent’s utility by x. In other words, economists assume that agents desire experiments according to their expectation, which is not necessarily a good assumption.
Hmm… I hope you interpret your own words so that what you write comes out correct; your language is imprecise, and at first I didn’t see a way to read what you wrote that made sense.
When I reread your comment to which I asked my question with this new perspective, the question disappeared. By “preference about probability distributions” you simply mean preference over events, that doesn’t necessarily satisfy expected utility axioms.
ETA: Note that in this case, there isn’t necessarily a way of assigning (subjective) probabilities, as subjective probabilities follow from preferences, but only if the preferences are of the right form. Thus, saying that those non-expected-utility preferences are over probability distributions is more conceptually problematic than saying that they are over events. If you don’t use probabilities in the decision algorithm, probabilities don’t mean anything.
I am eager to improve. Please give specific suggestions.
Right.
Hm? I thought subjective probabilities followed from prior probabilities and observed evidence and stuff. What do preferences have to do with them?
Are you using my technical definition of event or the standard definition?
Probably I should not have redefined “event”; I now see that my use is nonstandard. Hopefully I can clarify things. Let’s say I am going to roll a die and give you a number of dollars equal to the number of spots on the face left pointing upward. According to my (poorly chosen) use of the word “event”, the process of rolling the die is an “event”. According to what I suspect the standard definition is, the die landing with 4 spots face up would be an “event”. To clear things up, I suggest that we refer to the rolling of the die as an “experiment”, and 4 spots landing face up as an “outcome”. I’m going to rewrite my comment with this new terminology. I’m also replacing “value” with “desire”, for what it’s worth.
The way I want to evaluate the desirability of an experiment is more complicated than simply computing its expected value. But I still use probabilities. I would not give Pascal’s mugger any money. I would think very carefully about an experiment that had a 99% probability of getting me killed and a 1% probability of generating 101 times as much utility as I expect to generate in my lifetime, whereas a perfect expected utility maximizer would take this deal in an instant. Etc.
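(A rough check of that last comparison, on the assumption that the status quo is worth one lifetime-unit of utility and death is worth zero: the experiment’s expected utility is 0.99 × 0 + 0.01 × 101 = 1.01 lifetime-units, which beats 1, so a straight expectation-maximizer takes it, if only barely.)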
Roughly speaking, an event is a set of alternative possibilities. So the whole roll of a die is an event (the set of all possible outcomes of the roll), as are specific outcomes (sets that contain a single outcome). See probability space for a more detailed definition.
One way of defining prior and utility is just to first take a preference over the events of the sample space, and then choose any prior+utility pair such that the expected utility calculated from them induces the same order on events. Of course, the original order on events has to be “nice” in some sense for it to be possible to find a prior+utility pair with this property.
All observation and updating consists in choosing which events you work with. Once the prior is fixed, it never changes.
(Of course, you should read up on the subject in greater detail than I hint at.)
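(Here is a toy illustration of that construction, with all numbers invented by me: the preference order over acts is the primitive, and the prior and utility are just any pair that jointly reproduce it via expected utility, so two quite different prior+utility pairs can encode the same preferences.)

```python
# Two states of the world and three acts mapping states to prizes; all
# values are hypothetical.
states = ["rain", "shine"]

acts = {
    "A": {"rain": "x", "shine": "x"},
    "B": {"rain": "y", "shine": "z"},
    "C": {"rain": "z", "shine": "z"},
}

def expected_utility(act, prior, utility):
    return sum(prior[s] * utility[act[s]] for s in states)

def induced_order(prior, utility):
    return sorted(acts, key=lambda a: expected_utility(acts[a], prior, utility),
                  reverse=True)

# Two different prior+utility pairs...
pair_1 = ({"rain": 0.50, "shine": 0.50}, {"x": 1.0, "y": 3.0, "z": 0.0})
pair_2 = ({"rain": 0.25, "shine": 0.75}, {"x": 1.0, "y": 6.0, "z": 0.0})

# ...induce the same preference order over the acts: B > A > C.
print(induced_order(*pair_1), induced_order(*pair_2))
```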
Um, isn’t that obviously wrong? It sounds like you are suggesting that we say “I like playing blackjack better than playing the lottery, so I should choose a prior probability of winning each and a utility associated with winning each so that that preference will remain consistent when I switch from ‘preference mode’ to ‘utilitarian mode’.” Wouldn’t it be better to choose the utilities of winning based on the prizes they give? And choose the priors for each based on studying the history of each game carefully?
Events are sets of outcomes, right? It sounds like you are suggesting that people update their probabilities by reshuffling which outcomes go with which events. Aren’t events just a layer of formality over outcomes? Isn’t real learning what happens when you change your estimates of the probabilities of outcomes, not when you reclassify them?
It almost seems to me as if we are talking past each other… I think I need a better background on this stuff. Can you recommend any books that explain probability for the layman? I already read a large section of one, but apparently it wasn’t very good...
Although I do think there is a chance you are wrong. I see you mixing up outcome-desirability estimates with chance-of-outcome estimates, which seems obviously bad.
If you don’t want the choice of preference to turn out bad for you, choose good preference ;-) There is no freedom in choosing your preference, as the “choice” is itself a decision-concept, defined in terms of preference, and can’t be a party to the definition of preference. When you are speaking of a particular choice of preference being bad or foolish, you are judging this choice from the reference frame of some other preference, while with preference as foundation of decision-making, you can’t go through this step. It really is that arbitrary. See also: Priors as Mathematical Objects, Probability is Subjectively Objective.
You are confusing the probability space and its prior (the fundamental structure that binds the rest together) with random variables and their probability distributions (things that are based on the probability space and that “interact” with each other through their definition in terms of the common probability space, restricted to common events). Informally, when you update a random variable given evidence (an event) X, you recalculate the probability distribution of that variable based only on the remaining elements of the probability space within X. Since this can often be done using the other probability distributions of various variables lying around, you don’t always see the probability space explicitly.
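(A small code sketch of that distinction, using a toy die-roll space: the probability space and its prior are the fixed objects, and a random variable’s distribution, before or after conditioning on an event, is just computed from that space by restricting and renormalizing.)

```python
from collections import defaultdict

# Probability space: elementary outcomes of one die roll, with the prior.
space = {outcome: 1 / 6 for outcome in range(1, 7)}

# A random variable defined on the space.
def is_even(outcome):
    return outcome % 2 == 0

def distribution(rv, space, event=None):
    """Distribution of rv, optionally restricted to the outcomes in `event`
    and renormalized -- which is all that 'updating on X' amounts to."""
    if event is None:
        event = set(space)
    total = sum(p for o, p in space.items() if o in event)
    dist = defaultdict(float)
    for o, p in space.items():
        if o in event:
            dist[rv(o)] += p / total
    return dict(dist)

print(distribution(is_even, space))                    # roughly {False: 1/2, True: 1/2}
print(distribution(is_even, space, event={4, 5, 6}))   # given "roll > 3": roughly {True: 2/3, False: 1/3}
```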
Well, rejection’s not a solution per se until you pick something justifiable to replace it with.
I’d be interested in a top-level post on the subject.
If this condition makes a difference to you, your answer must also be to take as many cards as Omega has to offer.
I don’t follow.
(My assertion implies that Omega cannot double my utility indefinitely, so it’s inconsistent with the problem as given.)
You’ll just have to construct a less convenient possible world where Omega has merely a trillion cards rather than an infinite number of them, and answer the question about taking a trillion cards, which, if you accept the lottery all the way, leaves you facing roughly two-to-the-trillionth-power-to-one odds of dying. Find my reformulation of the topic problem here.
Agreed.
Gotcha. Nice reformulation.