I think the purpose of this article is to point to some intuitive failures of a simple linear utility function. In other words, probably everyone who reads it agrees with you. The real challenge is in creating a utility function that wouldn’t output the wrong answer on corner cases like this.
No. No, that is not the purpose of the article.
Sorry, I’ve read that and still don’t know what it is that I’ve got wrong. Does this article not indicate a problem with simple linear utility functions, or is that not its purpose?
Eliezer disagrees. His point of view is:
While some people tried to appeal to non-linear aggregation, you would have to appeal to a non-linear aggregation which was non-linear enough to reduce 3^^^3 to a small constant.
whereas I and many others appeal to zero-aggregation, which indeed reduces any finite number (and hence the limit as this aggregation is taken to infinity) to zero.
The distinction is not that of rationality vs irrationality (e.g. scope insensitivity), but of the problem setup.
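To make the contrast concrete, here is a minimal sketch, using a per-speck disutility ε, a torture disutility T, and an aggregation function f that are purely illustrative symbols of mine; the thread itself does not formalize any of them:

$$
U_{\text{linear}}(n) = n\,\varepsilon, \qquad
U_{\text{non-linear}}(n) = f(n\,\varepsilon)\ \text{with}\ f\ \text{bounded}, \qquad
U_{\text{zero}}(n) = 0\ \text{for every finite}\ n .
$$

Under linear aggregation the 3^^^3 term dwarfs T; a non-linear aggregation has to flatten f enough that f(3^^^3 · ε) stays below T; and zero-aggregation gives 0 for any finite n, so the limit as n goes to infinity is also 0, which is less than T.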
If you can explain zero aggregation in more detail, or point me to a reference, that would be appreciated, since I haven’t seen any full discussion of it.
The wrong answer is the people who prefer the specks, because that’s the answer which, if a trillion people answered that way, would condemn whole universes to blindness (instead of a mere trillion beings to torture).
Adding multiple dust specks to the same people definitely removes the linear character of the dust speck harm—if you take the number of dust specks necessary to make someone blind and spread them out to a lot more people you drastically reduce the total harm. So that is not an appropriate way of reformulating the question. You are correct that the specks are the “wrong answer” as far as the author is concerned.
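To see the spreading-out point numerically, here is a toy sketch with an assumed convex per-person harm curve and a made-up blindness threshold; neither comes from the article:

```python
# Toy model of per-person dust-speck harm.
# ASSUMPTIONS (not from the article): harm grows convexly with the number of
# specks in the same eye and saturates at blindness, placed arbitrarily at
# 1,000,000 specks; one blinding counts as harm 1.0.

BLINDNESS_SPECKS = 1_000_000

def per_person_harm(specks: int) -> float:
    """Convex harm curve: negligible for one speck, catastrophic near blindness."""
    return min(specks / BLINDNESS_SPECKS, 1.0) ** 2

total_specks = 10 * BLINDNESS_SPECKS  # the same budget of specks, split two ways

concentrated = 10 * per_person_harm(BLINDNESS_SPECKS)  # ten people blinded
spread_out = total_specks * per_person_harm(1)          # one speck per person

print(concentrated)  # 10.0   -> ten blindings
print(spread_out)    # 1e-05  -> a trivial total under this curve
```

The same number of specks produces ten blindings when concentrated but a negligible total when spread one per person, which is why the reformulation changes the answer.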
Did the people choosing “specks” ask whether the persons in question might already have suffered other dust specks (or sneezes, hiccups, stubbed toes, etc.) inflicted immediately beforehand by other agents deciding as they did, when they chose “specks”?
Most people didn’t, I suppose. They were asked:
Would you prefer that one person be horribly tortured for fifty years without hope or rest, or that 3^^^3 people get dust specks in their eyes?
Which isn’t the same as asking what people would do if they were given the power to choose one or the other. And even if people were asked the latter, it is plausible they would not assume the existence of a trillion other agents making the same decision over the same set of people. That’s a rather non-obvious addition to the thought experiment, which is already foreign to everyday experience.
In any case it’s just not the point of the thought experiment. Take the least convenient possible world: do you still choose torture if you know for sure there are no other agents choosing as you are over the same set of people?
do you still choose torture if you know for sure there are no other agents choosing as you are over the same set of people?
Yes. The consideration of what the world would look like if everyone chose the same as me is a useful intuition pump, but it just illustrates the ethics of the situation; it doesn’t truly modify them.
Any choice isn’t really just about that particular choice; it’s about the mechanism you use to arrive at that choice. If people believe it doesn’t matter how many people they each inflict tiny disutilities on, the world ends up worse off.
The point of the article is to illustrate scope insensitivity in the human utility function. Turning the problem into a collective action problem or an acausal decision theory problem by adding additional details to the hypothetical is not a useful intuition pump since it changes the entire character of the question.
For example, consider the following choice: You can give a gram of chocolate to 3^^^3 children who have never had chocolate before. Or you can torture someone for 50 years.
Easy. Everyone should have the same answer.
But wait! You forgot to consider that trillions of other people were being given the same choice! Now 3^^^3 children have diabetes.
This is exactly what you’re doing with your intuition pump, except that the value of eating additional chocolate inverts at a certain point, whereas the harm of additional dust specks in the same eye gets drastically worse past a certain point. In both cases the per-person utility function is not linear, and the added detail thus distorts the problem.
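For what it’s worth, here is a toy illustration of the two shapes being contrasted; both curves are illustrative assumptions, since the article specifies neither:

```python
# Toy per-person curves for the two cases being contrasted.
# ASSUMPTIONS: both shapes are illustrative placeholders.

def chocolate_value(grams: float) -> float:
    """Value of chocolate rises, then inverts: past a point, more is harmful."""
    return grams - 0.01 * grams ** 2  # peaks at 50 g, turns negative past 100 g

def speck_harm(specks: float) -> float:
    """Harm of repeated specks in the same eye compounds rather than inverting."""
    return (specks / 1_000_000) ** 2  # tiny for one speck, 1.0 at the blindness point

print(chocolate_value(1), chocolate_value(1_000))  # 0.99    -9000.0
print(speck_harm(1), speck_harm(1_000_000))        # 1e-12   1.0
```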
Only if you assume that the dust speck decisions must be made in utter ignorance of the (trillion-1) other decisions. If the ignorance is less than utter, a nonlinear utility function that accepts the one dust speck will stop making the decision in favor of dust specks before universes go blind.
For example, since I know how Texas will vote for President next year (it will give its Electoral College votes to the Republican), I can instead use my vote to signal which minor-party candidate strikes me as the most attractive, thus promoting his party relative to the others, without having to worry whether my vote will elect him or cost my preferred candidate the election. Obviously, if everyone else in Texas did the same, some minor party candidate would win, but that doesn’t matter, because it isn’t going to happen.
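A rough sketch of how such a decider could behave once the ignorance is less than utter; the harm curve, the blindness threshold, the torture-to-blindness exchange rate, and the population size are all placeholder assumptions chosen so the flip is easy to see:

```python
# Toy decider for the "less than utter ignorance" case: it chooses "speck"
# only while the marginal harm of one more speck, given how many specks these
# people have already accumulated from earlier deciders, stays below the harm
# of one torture.
# ASSUMPTIONS: every constant below is a placeholder, not taken from the thread.

BLINDNESS_SPECKS = 1_000_000
TORTURE_HARM = 1_000.0  # in units where one blinding == 1.0
PEOPLE = 10 ** 12       # stand-in population; 3^^^3 is not needed to see the shape

def harm(specks: int) -> float:
    """Convex per-person harm that saturates at blindness."""
    return min(specks / BLINDNESS_SPECKS, 1.0) ** 2

def choose(specks_already_inflicted: int) -> str:
    marginal = harm(specks_already_inflicted + 1) - harm(specks_already_inflicted)
    return "speck" if PEOPLE * marginal < TORTURE_HARM else "torture"

print(choose(0))      # "speck"   -> the first deciders still accept one speck
print(choose(1_000))  # "torture" -> flips long before anyone is near blindness
```

Under these placeholder numbers the decision flips to torture after only a few hundred accumulated specks per person, far short of the million needed for blindness, which is the claim made above.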