Would you prefer that one person be horribly tortured for fifty years without hope or rest, or that 3^^^3 people get dust specks in their eyes?
Which isn’t the same as asking what people would do if they were given the power to choose one or the other. And even if people were asked the latter, it is plausible they would not assume the existence of a trillion other agents making the same decision over the same set of people. That’s a rather non-obvious addition to a thought experiment which is already foreign to everyday experience.
In any case it’s just not the point of the thought experiment. Take the least convenient possible world: do you still choose torture if you know for sure there are no other agents choosing as you are over the same set of people?
do you still choose torture if you know for sure there are no other agents choosing as you are over the same set of people?
Yes. Considering how the world would look if everyone chose the same as me is a useful intuition pump, but it only illustrates the ethics of the situation; it doesn’t truly modify them.
Any choice isn’t really just about that particular choice; it’s about the mechanism you use to arrive at it. If people believe it doesn’t matter how many people they each inflict tiny disutilities on, the world ends up worse off.
The point of the article is to illustrate scope insensitivity in the human utility function. Turning the problem into a collective action problem or an acausal decision theory problem by adding further details to the hypothetical is not a useful intuition pump, since it changes the entire character of the question.
For example, consider the following choice: You can give a gram of chocolate to 3^^^3 children who have never had chocolate before. Or you can torture someone for 50 years.
Easy. Everyone should have the same answer.
But wait! You forgot to consider that trillions of other people were being given the same choice! Now 3^^^3 children have diabetes.
This is exactly what you’re doing with your intuition pump, except that the value of eating additional chocolate inverts at a certain point, whereas dust specks in your eye get exponentially worse past a certain point. In both cases the utility function is non-linear, and that non-linearity distorts the problem.
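To make the non-linearity point concrete, here is a minimal numerical sketch. Every value in it (the per-speck and torture disutilities, the population stand-in, and the saturating cap) is an assumption invented purely for illustration; 3^^^3 itself is far too large to represent, so a large placeholder is used instead.

    # Toy comparison of linear vs. saturating aggregation of many tiny harms.
    # All numbers are illustrative assumptions, not claims about real utilities.
    SPECK_DISUTILITY = 1e-9       # assumed harm of one dust speck to one person
    TORTURE_DISUTILITY = 1e6      # assumed harm of fifty years of torture to one person
    N_PEOPLE = 10 ** 30           # stand-in for 3^^^3, which cannot be represented directly

    # Linear aggregation: total harm is the straight sum over people.
    linear_specks = SPECK_DISUTILITY * N_PEOPLE
    print(linear_specks > TORTURE_DISUTILITY)        # True: the specks dominate, so choose torture

    # Saturating aggregation: suppose instead that harms this small stop adding up past a cap.
    CAP = 1e3                     # assumed ceiling on aggregated speck-harm
    saturating_specks = min(linear_specks, CAP)
    print(saturating_specks > TORTURE_DISUTILITY)    # False: now the specks look negligible

The flip between those two comparisons is the whole disagreement: under linear aggregation the astronomically large number wins and torture is the lesser harm, and only by building some non-linearity into the aggregation (saturation, an inversion past a threshold, and so on) does the dust-speck option come out ahead.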