I guess I would follow up with these questions: (1) when you see someone else hurting, or attend a friend’s funeral, do you feel sad; (2) comparing two single cases, is your visceral fear of your own death stronger than that emotion; (3) do you decline to multiply out of a deliberate belief that all events after your own death ought to have zero utility to you, even if they feel sad when you think about them now; or (4) do you just generally want to leave the intuitive judgment in (2), with its innate lack of multiplication, undisturbed?
1: Yes. 2: Yes. 3: No. 4: I see a number of reasons not to do straight multiplication:
Straight multiplication leads to an absurd degree of unconcern for oneself, given that the number of potential people is astronomical. It means, for example, that you can’t watch a movie for enjoyment, unless that somehow increases your productivity for saving the world. (In the least convenient world, watching movies uses up time without increasing productivity.)
No one has proposed a form of utilitarianism that is free from paradoxes (e.g., the Repugnant Conclusion).
My current position resembles the “Proximity argument” from Revisiting torture vs. dust specks: don’t ask me to value strangers equally to friends and relatives. If each additional person matters 1% less than the previous one, then even an infinite number of people getting dust specks in their eyes adds up to a finite and not especially large amount of suffering (see the arithmetic sketch after this list).
This agrees with my intuitive judgment and also seems to have relatively few philosophical problems, compared to valuing everyone equally without any kind of discounting.
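To make the arithmetic behind that bullet explicit: a 1% per-person discount turns the weights into a geometric series, so the total weighted suffering is bounded by 1/(1 - 0.99) = 100 times the harm to the first person, no matter how many people you add. A minimal sketch, with the per-speck disutility normalized to 1 purely for illustration:

```python
# Geometric discounting from the "proximity argument" bullet above.
# Assumptions for illustration: each dust speck costs 1 unit of disutility,
# and each additional person is weighted 1% less than the previous one.
RATIO = 0.99

def discounted_total(n_people: int, ratio: float = RATIO) -> float:
    """Total weighted disutility when n_people each suffer 1 unit."""
    return sum(ratio ** k for k in range(n_people))

print(discounted_total(1_000_000))   # ~100.0 even for a million people
print(1 / (1 - RATIO))               # closed-form limit of the series: 100.0
```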
I guess my most important question would be: Do you feel that way, or are you deciding that way?
My last bullet above already answered this, but I’ll repeat for clarification: it’s both.
PS again: Would you accept a 60% probability of death in exchange for healing the rest of reality?
This should be clear from my answers above as well, but yes.
Oh, ’ello. Glad to see somebody still remembers the proximity argument. But it’s adapted to our world where you generally cannot kill a million distant people to make one close relative happy. If we move to a world where Omegas regularly ask people difficult questions, a lot of people adopting proximity reasoning will cause a huge tragedy of the commons.
About Eliezer’s question, I’d exchange my life for a reliable 0.001 chance of healing reality, because I can’t imagine living meaningfully after being offered such a wager and refusing it. Can’t imagine how I’d look other LW users in the eye, that’s for sure.
Can’t imagine how I’d look other LW users in the eye, that’s for sure.
I publicly rejected the offer and don’t feel like a pariah here. I wonder what the actual degree of altruism among LW users is. Should we set up a poll and gather some evidence?
Cooperation is a different consideration from preference. You can prefer only to keep your own “body” in certain dynamics, no matter what happens to the rest of the world, and still benefit the most from, roughly speaking, helping other agents, which can include occasional self-sacrifice à la counterfactual mugging.
I’d be interested to know what you think of Critical-Level Utilitarianism and Population-Relative Betterness as ways of avoiding the Repugnant Conclusion and other problems.
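To spell out what I mean by Critical-Level Utilitarianism: it scores an outcome by summing each person’s welfare minus a fixed positive critical level, so adding lives that are barely worth living lowers the total rather than raising it. A minimal sketch, with the critical level of 10 and all welfare figures made up purely for illustration:

```python
# Critical-Level Utilitarianism (CLU): an outcome's value is the sum over persons
# of (welfare - critical_level), for some fixed positive critical level.
# All numbers below are made up purely for illustration.

def total_utility(welfares):
    return sum(welfares)

def clu_value(welfares, critical_level=10):
    return sum(w - critical_level for w in welfares)

few_happy = [90] * 100            # a small population of very good lives
many_marginal = [1] * 1_000_000   # a huge population of lives barely worth living

print(total_utility(few_happy), total_utility(many_marginal))   # 9000 vs 1000000: the huge population wins
print(clu_value(few_happy), clu_value(many_marginal))           # 8000 vs -9000000: blocked under CLU
```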