I would predict, based on human nature, that if the 3^^^3 people were asked whether they would each accept a dust speck in the eye in exchange for another individual not being tortured for 50 years, they would probably vote for the dust specks.
I think you’ve nailed my problem with this scenario: anyone who wouldn’t go for this is someone I’d be disinclined to listen to.
Perhaps this is just silliness, but I am curious how you would feel if the question were:
“You have a choice: Either one person gets to experience pure, absolute joy for 50 years, or 3^^^3 people get to experience a moment of pleasure on the level experienced when eating a popsicle.”
Do you choose popsicle?
I suspect I would. But not only does utility not add linearly; you also can’t just flip the sign, because positive and negative are calculated by different systems.
I don’t think it’s silly at all.
Personally, I experience more or less the same internal struggle with this question as with the other: I endorse the idea that what matters is total utility, but my moral intuitions aren’t entirely aligned with that idea, so I keep wanting to choose the individual benefit (joy or non-torture) despite being unable to justify choosing it.
Also, as David Gerard says, it’s a different function… that is, you can’t derive an answer to one question from an answer to the other… but the numbers we’re tossing around are so huge that the difference hardly matters.