Okay. I’m sure you’ve seen this question before, but I’m going to ask it anyway.
Given a choice between
A world with seven billion mildly happy people, or
A world with seven billion minus one really happy people, and one person who just got a papercut
Are you really going to choose the former? What’s your reasoning?
From a practical perspective, accepting the papercut is the obvious choice because it’s good to be nice to other value systems.
Even if I’m only considering my own values, I give some intrinsic weight to what other people care about. (“NU” is just an approximation of my intrinsic values.) So I’d still accept the papercut.
I also don’t really care about mild suffering—mostly just torture-level suffering. If it were 7 billion really happy people plus 1 person tortured, that would be a much harder dilemma.
In practice, the ratio of expected heaven to expected hell in the future is much smaller than 7 billion to 1, so even if someone is just a “negative-leaning utilitarian” who cares orders of magnitude more about suffering than happiness, s/he’ll tend to act like a pure NU on any actual policy question.
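As a rough illustration (the suffering weight and the numbers below are made up for the example, not estimates of anything), here is why a large suffering multiplier ends up deciding the verdict whenever the happiness-to-suffering ratio is far below that multiplier:

```python
# Toy sketch with made-up numbers: a "negative-leaning" utilitarian who
# weighs suffering 1,000x more than happiness evaluates some policy.
def weighted_value(happiness, suffering, suffering_weight=1_000):
    """Happiness minus weighted suffering, all in arbitrary utility units."""
    return happiness - suffering_weight * suffering

# Unless expected happiness exceeds expected suffering by more than the
# weight itself (1,000 to 1 here), the suffering term decides the sign,
# so the verdict matches what a pure NU would say.
print(weighted_value(happiness=500, suffering=1))    # -500: reject, same as pure NU
print(weighted_value(happiness=2_000, suffering=1))  # 1000: only here do they diverge
```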
The second option is a world with seven billion minus one really happy people and one person who is a tiny bit less than mildly happy?
My reason to choose the former would be that each of those lives is experienced by only one person, and each person experiences only one life. In the former case, no subjective experience is worse than mildly happy. In the latter case, one subjective experience is worse than that. It doesn’t matter how much happiness or pain a number of people will cumulatively experience, because no one actually experiences the cumulative total. All that matters is improving the worst life at any given moment.
I won’t be surprised if my reasoning is bullshit, but I’m not seeing it.
The problem I see here is that if you literally care only about the “worst life at any given moment”, then the situations “seven billion extremely happy people, one mildly unhappy person” and “seven billion mildly happy people, one mildly unhappy person” are equivalent, because the worst-off person is in the same situation in both. Which means that if you had a magical button that could convert the latter situation into the former, you wouldn’t bother pressing it, because you wouldn’t see any point in doing so. Is that what you really believe?
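To make the objection concrete, here is a small sketch with hypothetical welfare numbers and a deliberately literal reading of “only the worst life matters”:

```python
# Hypothetical per-person welfare levels (arbitrary units, invented for
# illustration), with 7 people standing in for 7 billion.
EXTREMELY_HAPPY, MILDLY_HAPPY, MILDLY_UNHAPPY = 10, 1, -1

world_a = [EXTREMELY_HAPPY] * 7 + [MILDLY_UNHAPPY]  # "7 billion extremely happy" plus one
world_b = [MILDLY_HAPPY] * 7 + [MILDLY_UNHAPPY]     # "7 billion mildly happy" plus one

def worst_off_only(world):
    """A literal 'only the worst life matters' rule: rank worlds by their minimum."""
    return min(world)

# Both worlds score -1, so this rule calls them exactly equivalent,
# and the magical button converting B into A looks pointless under it.
print(worst_off_only(world_a), worst_off_only(world_b))  # -1 -1
```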
I care about wellbeing, but only second to pain. I’d definitely press a button maximizing happiness if it didn’t cause individual unhappiness worse than it cured. Doesn’t that make sense?
On second thought, two equally happy people seem better than one, and two equally unhappy people seem worse than one, so numbers do matter. Maybe it doesn’t make sense after all. Or it’s a mix of a moral guideline (NU) and a personal preference?
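One way to read “wellbeing, but only second to pain” is as a lexical rule: compare the worst-off person first and use total welfare only as a tie-breaker. A sketch under that (possibly too charitable) reading, reusing the made-up numbers from the sketch above:

```python
# Made-up welfare numbers again: 7 people stand in for 7 billion.
world_a = [10] * 7 + [-1]  # "extremely happy" majority, one mildly unhappy person
world_b = [1] * 7 + [-1]   # "mildly happy" majority, one mildly unhappy person

def pain_first_then_total(world):
    """Lexical rule: compare the worst-off person first, total welfare second."""
    return (min(world), sum(world))

# The worst-off person is equally badly off in both worlds, so the
# tie-breaker decides, and the extremely-happy world wins, unlike under
# the pure worst-off-only rule above.
print(pain_first_then_total(world_a) > pain_first_then_total(world_b))  # True
```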
Good point. Also, in most multiverse theories, the worst possible experience necessarily exists somewhere.
And this is why destroying everything in existence doesn’t seem obviously evil (not that I’d act on it...)
That would also be futile, because somewhere in the multiverse your plans to destroy everything would fail.