I guess no more than 10 cards. That’s based on not being able to imagine a scenario such that I’d prefer a .999 probability of death plus a .001 probability of that scenario to the status quo. But it’s just a guess, because Omega might have better imagination than I do, or understand my utility function better than I do.
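(A minimal sketch of the arithmetic behind this guess, assuming, as the “.5 probability” phrasing in the final comment below suggests, that each card drawn is an independent 50/50 gamble between death and an improved prize. Under that assumption, ten draws compound to roughly the .001 survival figure used here; the numbers are illustrative, not from the thread.)

```python
# Sketch (not from the original thread): if each card is an independent 50/50
# gamble between death and a doubled prize, then surviving 10 draws has
# probability 0.5 ** 10, which is roughly the .001 figure cited above.
survival_after_10_cards = 0.5 ** 10
print(survival_after_10_cards)  # 0.0009765625, about 0.001
```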
Omega offers you the healing of all the rest of Reality; every other sentient being will be preserved at what would otherwise be death and allowed to live and grow forever, and all unbearable suffering not already in your causal past will be prevented. You alone will die.
You wouldn’t take a trustworthy 0.001 probability of that reward and a 0.999 probability of death, over the status quo? I would go for it so fast that there’d be speed lines on my quarks.
Really, this whole debate is just about people being told “X utilons” and interpreting utility as having diminishing marginal utility—I don’t see any reason to suppose there’s more to it than that.
There’s no reason for Omega to kill me in the winning outcome...
You wouldn’t take a trustworthy 0.001 probability of that reward and a 0.999 probability of death, over the status quo?
Well, I’m not as altruistic as you are. But there must be some positive X such that even you wouldn’t take a trustworthy X probability of that reward and a 1-X probability of death, over the status quo, right? Suppose you’ve drawn enough cards to win this prize, what new prize can Omega offer you to entice you to draw another card?
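(A minimal sketch of the expected-utility comparison implicit in this exchange, using the common convention U(death) = 0; the function name and the numbers are hypothetical placeholders, not values from the thread. The point is that the threshold probability X shrinks as the reward’s utility grows.)

```python
# Sketch of the implicit expected-utility comparison (placeholder numbers).
# With U(death) = 0, a trustworthy wager giving the reward with probability x
# and death otherwise beats the status quo exactly when x * U(reward) > U(status quo).
def accept_wager(x, u_reward, u_status_quo, u_death=0.0):
    return x * u_reward + (1 - x) * u_death > u_status_quo

# The 0.001 wager is worth taking iff U(reward) > 1000 * U(status quo):
print(accept_wager(0.001, u_reward=2_000, u_status_quo=1.0))  # True
print(accept_wager(0.001, u_reward=500, u_status_quo=1.0))    # False
```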
There’s no reason for Omega to kill me in the winning outcome...
Omega’s a bastard. So what?
Well, I’m not as altruistic as you are.
WHAT? Are you honestly sure you’re THAT not as altruistic as I am?
But there must be some positive X such that even you wouldn’t take a trustworthy X probability of that reward and a 1-X probability of death, over the status quo, right?
There’s the problem of whether the scenario I described, which involves a “forever” and “over all space”, actually has infinite utility compared to increments in my own life, which even if I would otherwise live forever would span only an infinitesimal fraction of all space; but if we fix that with a rather smaller prize that I would still accept, then yes, of course.
Suppose you’ve drawn enough cards to win this prize, what new prize can Omega offer you to entice you to draw another card?
Heal this Reality plus another three?
That’s fine, I just didn’t know if that detail had some implication that I was missing.
WHAT? Are you honestly sure you’re THAT not as altruistic as I am?
Yes, I’m pretty sure, although I leave open the possibility that I may encounter an argument in the future that would persuade me to change my mind. My understanding is that most people have preferences like mine, so I’m surprised that you’re so surprised.
It seems that I had missed the earlier posts on bounded vs. unbounded utility functions. I’ll follow up there to avoid retreading old ground.
Yes, I’m pretty sure, although I leave open the possibility that I may encounter an argument in the future that would persuade me to change my mind. My understanding is that most people have preferences like mine, so I’m surprised that you’re so surprised.
I’m shocked; I hadn’t thought that most people had preferences like yours, or at least I thought they would not verbally express such preferences, their “real” preferences being a whole separate moral issue beyond that. I would have thought that it would be mainly psychopaths, the Rand-damaged, and a few unfortunate moral philosophers with mistaken metaethics, who would decline that offer.
I guess I would follow up with these questions: (1) When you see someone else hurting, or attend a friend’s funeral, do you feel sad; (2) are you more viscerally afraid of your own death than the strength of that emotion, if comparing two single cases; (3) do you decline to multiply out of a deliberate belief that all events after your own death ought to have zero utility to you, even if they feel sad when you think about them now; or (4) do you just generally want to leave the intuitive judgment (2) with its innate lack of multiplication undisturbed?
Or if I’m asking the wrong questions here, then what is going on? I would expect most humans to instinctively feel that their whole tribe, to say nothing of the entire rest of reality, was worth something; and I would expect a rationalist to understand that if their own life does not literally have lexicographic priority (i.e., lives of others have infinitesimal=0 value in the utility function) then the multiplication factor here is overwhelming; and I would also expect you, Wei Dai, to not mistakenly believe that you were rationally forced to be lexicographically selfish regardless of your feelings… so I’m really not clear on what could be going on here.
I guess my most important question would be: Do you feel that way, or are you deciding that way? In the former case, I might just need to make a movie showing one individual after another being healed, and after you’d seen enough of them, you would agree—the visceral emotional force having become great enough. In the latter case I’m not sure what’s going on.
PS again: Would you accept a 60% probability of death in exchange for healing the rest of reality?
I guess I would follow up with these questions: (1) When you see someone else hurting, or attend a friend’s funeral, do you feel sad; (2) are you more viscerally afraid of your own death than the strength of that emotion, if comparing two single cases; (3) do you decline to multiply out of a deliberate belief that all events after your own death ought to have zero utility to you, even if they feel sad when you think about them now; or (4) do you just generally want to leave the intuitive judgment (2) with its innate lack of multiplication undisturbed?
1: Yes. 2: Yes. 3: No. 4: I see a number of reasons not to do straight multiplication:
Straight multiplication leads to an absurd degree of unconcern for oneself, given that the number of potential people is astronomical. It means, for example, that you can’t watch a movie for enjoyment, unless that somehow increases your productivity for saving the world. (In the least convenient world, watching movies uses up time without increasing productivity.)
No one has proposed a form of utilitarianism that is free from paradoxes (e.g., the Repugnant Conclusion).
My current position resembles the “Proximity argument” from Revisiting torture vs. dust specks:
Proximity argument: don’t ask me to value strangers equally to friends and relatives. If each additional person matters 1% less than the previous one, then even an infinite number of people getting dust specks in their eyes adds up to a finite and not especially large amount of suffering.
This agrees with my intuitive judgment and also seems to have relatively few philosophical problems, compared to valuing everyone equally without any kind of discounting.
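(A quick check of the arithmetic in the quoted proximity argument: weighting each additional person 1% less than the last gives a geometric series of weights, so even infinitely many dust specks total at most 1/(1 − 0.99) = 100 times one person’s disutility. The code below is only an illustration of that sum.)

```python
# Check of the proximity argument's arithmetic: weighting the n-th person by
# 0.99 ** n gives a geometric series whose sum converges to 1 / (1 - 0.99) = 100,
# so an infinite number of dust specks is capped at ~100 specks' worth of disutility.
partial_sum = sum(0.99 ** n for n in range(10_000))  # partial sum, converging to 100
print(partial_sum)        # ~100
print(1 / (1 - 0.99))     # analytic limit, ~100 (exactly 100 in real arithmetic)
```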
I guess my most important question would be: Do you feel that way, or are you deciding that way?
My last bullet above already answered this, but I’ll repeat for clarification: it’s both.
PS again: Would you accept a 60% probability of death in exchange for healing the rest of reality?
This should be clear from my answers above as well, but yes.
Oh, ’ello. Glad to see somebody still remembers the proximity argument. But it’s adapted to our world where you generally cannot kill a million distant people to make one close relative happy. If we move to a world where Omegas regularly ask people difficult questions, a lot of people adopting proximity reasoning will cause a huge tragedy of the commons.
About Eliezer’s question, I’d exchange my life for a reliable 0.001 chance of healing reality, because I can’t imagine living meaningfully after being offered such a wager and refusing it. Can’t imagine how I’d look other LW users in the eye, that’s for sure.
Can’t imagine how I’d look other LW users in the eye, that’s for sure.
I publicly rejected the offer, and don’t feel like a pariah here. I wonder what the actual degree of altruism among LW users is. Should we set up a poll and gather some evidence?
Cooperation is a different consideration from preference. You can prefer only to keep your own “body” in certain dynamics, no matter what happens to the rest of the world, and still benefit the most from, roughly speaking, helping other agents. Which can include occasional self-sacrifice a la counterfactual mugging.
I’d be interested to know what you think of Critical-Level Utilitarianism and Population-Relative Betterness as ways of avoiding the repugnant conclusion and other problems.
So does your answer change once you’ve drawn 10 cards and are still alive?
No, if my guess is correct, then some time before I’m offered the 11th card, Omega will say “I can’t double your utility again” or equivalently, “There is no prize I can offer you such that you’d prefer a .5 probability of it to keeping what you have.”
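(A minimal sketch of the bounded-utility reading behind this reply, with U_MAX as a purely hypothetical bound, not a value from the thread: once the status quo is worth more than half the bound, no conceivable prize makes a 50/50 gamble with death worthwhile, which is the point at which Omega “can’t double your utility again”.)

```python
# Sketch (hypothetical bound): with a utility function bounded above by U_MAX
# and U(death) = 0, drawing another card on a 50/50 gamble requires
# 0.5 * U(prize) > U(status quo). Since U(prize) <= U_MAX, no prize can satisfy
# this once U(status quo) >= 0.5 * U_MAX.
U_MAX = 100.0

def some_prize_worth_drawing(u_status_quo, u_max=U_MAX):
    # Draw only if a 50/50 gamble on the best conceivable prize (utility u_max)
    # still beats keeping the status quo.
    return 0.5 * u_max > u_status_quo

print(some_prize_worth_drawing(30.0))  # True: a good enough prize still exists
print(some_prize_worth_drawing(60.0))  # False: Omega "can't double your utility again"
```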