The second option is a world with seven billion minus one really happy people and one person who is a tiny bit less than mildly happy?
My reason to choose the former is that each of those lives is experienced by only one person, and each person experiences only one life. In the former case, no subjective experience is worse than mildly happy; in the latter case, one subjective experience is worse than that. It doesn’t matter how much happiness or pain a group of people cumulatively experiences, because no one actually has the cumulative experience. All that matters is improving the worst life at any given moment.
I won’t be surprised if my reasoning is bullshit, but I’m not seeing it.
The problem I see here is that if you literally care only about the “worst life at any given moment”, then the situations “seven billion extremely happy people, one mildly unhappy person” and “seven billion mildly happy people, one mildly unhappy person” are equivalent, because the worst-off person is in the same situation in both. Which means that if you had a magical button that could convert the latter situation into the former, you wouldn’t bother pressing it, because you wouldn’t see any point in doing so. Is that what you really believe?
I care about wellbeing, but only second to pain. I’d definitely press a button that maximizes happiness as long as it didn’t cause any individual unhappiness worse than what it cured. Doesn’t that make sense?
On second thought, two equally happy people are better than one, and likewise with unhappiness. Maybe it doesn’t make sense after all. Or it’s a mix of a moral guideline (negative utilitarianism) and personal preference?
Good point. Also, in most multiverse theories, the worst possible experience necessarily exists somewhere.
And this is why destroying everything in existence doesn’t seem obviously evil (not that I’d act on it...)
That would also be futile, because somewhere in the multiverse your plans to destroy everything would fail.