Color me irrational, but in the problem as stated (a dust speck is a minor inconvenience, with zero chance of other consequences, unlike what some commenters suggest), there is no number of specks large enough to outweigh lasting torture (which ought to be properly defined, of course).
After digging through my inner utilities, the reason for my “obvious” choice is that everyone goes through minor annoyances all the time, and another speck of dust would be lost in the noise.
In a world where a speck of dust in the eye is a BIG DEAL, because life is otherwise so PERFECT and even one speck is noticed and not quickly forgotten, such occurrences can be accumulated and compared with torture. However, this was not specified in the original problem, so I assume that people live through calamities of dust-speck magnitude all the time, and adding one more changes nothing.
Eliezer’s question for you is “would you give one penny to prevent the 3^^^3 dust specks?”
And tell me, in a universe where a trillion agents individually decide that adding a dust speck to the lives of 3^^^3 people is in your words “NOT A BIG DEAL”, and the end result is that you personally end up with a trillion specks of dust (each of them individually NOT A BIG DEAL), which leave you (and entire multiverses of beings) effectively blind—are they collectively still not a big deal then?
If it would be a big deal in such a scenario, then can you tell me which of the above trillion agents should have preferred to go with torturing a single person instead, and how they would be able to modify their decision theory to serve that purpose, if they individually must choose the specks but collectively must choose the torture (lest they leave entire multiverses and omniverses entirely blind)?
If you have reason to suspect that a trillion people are making the same decision over the same set of people, the calculation changes, since dust specks in the same eye do not scale linearly.
which leave you (and entire multiverses of beings) effectively blind
I stipulated “noticed and not quickly forgotten” would be my condition for considering the other choice. Certainly being buried under a mountain of sand would qualify as noticeable by the unfortunate recipient.
But each individual dust speck wouldn’t be noticeable, and that’s all each individual agent decides to add—an individual dust speck to the life of each such victim.
So, again, what decision theory can somehow dismiss the individual effect as you would have it do, and yet take into account the collective effect?
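To make the tension explicit: write a hypothetical per-person harm function \(h(k)\) for \(k\) specks in one eye (a symbol introduced here for illustration, not taken from the thread). The scenario above amounts to

\[
h(1) \approx 0, \qquad h(10^{12}) \approx \text{blindness}, \qquad \text{yet} \qquad \sum_{j=1}^{10^{12}} h(1) = 0 \neq h(10^{12}).
\]

A rule that scores each agent’s contribution as \(h(1) \approx 0\) is implicitly assuming harms add across specks, while the blindness outcome shows they do not; a rule that wants to respect the collective effect has to price the marginal speck at the cumulative count, \(h(k+1) - h(k)\), rather than at zero.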
My personal decision theory has no problems dismissing noise-level influences, because they do not matter.
You keep trying to replace the original problem with your own: “how many sand specks constitute a heap?” This is not at issue here, as no heap is ever formed for any single one of the 3^^^3 people.
no heap is ever formed for any single one of the 3^^^3 people.
That’s not one of the guarantees you’re given: that a trillion other agents won’t be given similar choices. You’re not given the guarantee that your dilemma between minute disutility for astronomical numbers and a single huge disutility will be the only such dilemma anyone will ever have in the history of the universe, and you don’t have the guarantee that the decisions of a trillion different agents won’t pile up.
Well, it looks like we found the root of our disagreement: I take the original problem literally, one blink and THAT’S IT, while you say “you don’t have the guarantee that the decisions of a trillion different agents won’t pile up”.
My version has an obvious solution (no torture), while yours has to be analyzed in detail for every possible potential pile up, and the impact has to be carefully calculated based on its probability, the number of people involved, and any other conceivable and inconceivable (i.e. at the probability level of 1/3^^^3) factors.
Until and unless there is compelling evidence of an inevitable pile-up, I pick the no-torture solution. Feel free to prove that in a large chunk (>50%?) of all the impossible possible worlds the pile-up happens, and I will be happy to reevaluate my answer.
take the original problem literally, one blink and THAT’S IT
Every election is stolen one vote at a time.
My version has an obvious solution (no torture),
My version also has an obvious solution—choosing not to inflict disutility on 3^^^3 people.
and the impact has to be carefully calculated based on its probability,
That’s the useful thing about having such an absurdly large number as 3^^^3. We don’t really need to calculate it, “3^^^3” just wins. And if you feel it doesn’t win, then 3^^^^3 would win. Or 3^^^^^3. Add as many carets as you feel are necessary.
while yours has to be analyzed in detail for every possible potential pile up,
Thinking about whether the world would be better or worse if everyone decided as you did is really one of the fundamental methods of ethics, not some random bizarre scenario I just concocted for this experiment.
The point is: if everyone decided as you would, the specks would pile up, and universes would be doomed to blindness. If everyone decided as I would, they would not.
Prove it.
That’s the useful thing about having such an absurdly large number as 3^^^3. We don’t really need to calculate it, “3^^^3” just wins.
At this level, so many different low-probability factors come into play (e.g. blinking could be good for you because it reduces the incidence of eye problems in some cases) that “choosing not to inflict disutility” relies on an unproven assumption that the utility of blinking is always negative, no exceptions.
I reject unproven assumptions as torture justifications.
If the dust speck has a slight tendency to be bad, 3^^^3 wins.
If it does not have a slight tendency to be bad, it is not “the least bad bad thing that can happen to someone”—pick something worse for the thought experiment.
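For scale, the standard reading of the up-arrow notation (a textbook fact, not something argued in this thread):

\[
3\uparrow\uparrow 3 = 3^{3^{3}} = 3^{27} = 7{,}625{,}597{,}484{,}987,
\qquad
3\uparrow\uparrow\uparrow 3 = 3\uparrow\uparrow\bigl(3\uparrow\uparrow 3\bigr) = \underbrace{3^{3^{\cdot^{\cdot^{\cdot^{3}}}}}}_{7{,}625{,}597{,}484{,}987\ \text{threes}}.
\]

Under straightforwardly additive harm, any fixed disutility \(\varepsilon > 0\) per speck gives a total of \(3\uparrow\uparrow\uparrow 3 \cdot \varepsilon\), which dwarfs any finite torture disutility \(T\); that is the sense in which “3^^^3 wins” whenever the speck has even a slight tendency to be bad.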
If the dust speck has a slight tendency to be bad, 3^^^3 wins.
Only if you agree to follow EY in consolidating many different utilities in every possible case into one all-encompassing number, something I am yet to be convinced of, but that is beside the point, I suppose.
If it does not have a slight tendency to be bad, it is not “the least bad bad thing that can happen to someone”—pick something worse for the thought experiment.
Sure, if you pick something with a guaranteed negative utility and you think that there should be one number to bind them all, I grant your point.
However, this is not how the problem appears to me. A single speck in the eye has such an insignificant utility that there is no way to estimate its effects without knowing a lot more about the problem.
Basically, I am uncomfortable with the following somewhat implicit assumptions, all of which are required to pick torture over nuisance:
a tiny utility can be reasonably well estimated, even up to a sign
zillions of those utilities can be combined into one single number using a monotonic function
these utilities do not interact in any way that would make their combination change sign
the resulting number is invariably useful for decision making
A breakdown in any of these assumptions would mean needless torture of a human being, and I do not have enough confidence in EY’s theoretical work to stake my decision on it.
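Spelled out with symbols introduced here for illustration, the four assumptions amount to roughly:

\[
\begin{aligned}
&\text{(a)}\quad u_{\text{speck}} = -\varepsilon,\ \varepsilon > 0\ \text{known at least in sign};\\
&\text{(b)}\quad D = f(u_1, \dots, u_N),\ \text{with } f \text{ monotone in each argument};\\
&\text{(c)}\quad \operatorname{sign} f(u_1, \dots, u_N) = \operatorname{sign} \sum_i u_i\ \text{(interactions never flip the sign)};\\
&\text{(d)}\quad \text{choose the option with the larger } D.
\end{aligned}
\]

If any of (a)–(d) fails, the inference from “one speck is slightly bad” to “torture beats \(3\uparrow\uparrow\uparrow 3\) specks” does not go through, which is the worry being expressed.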
Only if you agree to follow EY in consolidating many different utilities in every possible case into one all-encompassing number, something I am yet to be convinced of, but that is beside the point, I suppose.
If you have a preference for some outcomes versus other outcomes, you are effectively assigning a single number to those outcomes. The method of combining these is certainly a viable topic for dispute—I raised that point myself quite recently.
Sure, if you pick something with a guaranteed negative utility and you think that there should be one number to bind them all, I grant your point.
However, this is not how the problem appears to me. A single speck in the eye has such an insignificant utility that there is no way to estimate its effects without knowing a lot more about the problem.
It was quite explicitly made a part of the original formulation of the problem.
Considering the assumptions you are unwilling to make:
tiny utility can be reasonably well estimated, even up to a sign
As I’ve been saying, there quite clearly seem to be things that fall in the realm of “I am confident this is typically a bad thing” and “it runs counter to my intuition that I would prefer torture to this, regardless of how many people it applied to”.
the resulting number is invariably useful for decision making
I addressed this at the top of this post.
zillions of those utilities can be combined into one single number using a monotonic function
these utilities do not interact in any way that would make their combination change sign
I think it’s clear that there must be some means of combining individual preferences into moral judgments, if there is a morality at all. I am not certain that it can be done with the utility numbers alone. I am reasonably certain that it is monotonic—I cannot conceive of a situation where we would prefer some people to be less happy just for the sake of them being less happy. What is needed here is more than just monotonicity, however—the aggregate must diverge as the number of people grows, with per-person utility held fixed. I raise this point here, and at this point I think it is the closest thing to a reasonable attack on Eliezer’s argument.
On balance, I think Eliezer is likely to be correct; I do not have sufficient worry that I would stake some percent of 3^^^3 utilons on the contrary and would presently pick torture if I was truly confronted with this situation and didn’t have more time to discuss, debate, and analyze. Given that there is insufficient stuff in the universe to make 3^^^3 dust specks, much less the eyes for them to fly into, I am supremely confident that I won’t be confronted with this choice any time soon.
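A toy illustration of why monotonicity alone does not settle it (the aggregator below is invented for this note, not proposed anywhere in the thread): aggregate \(N\) identical specks of disutility \(\varepsilon > 0\) as

\[
A(N) = C\left(1 - e^{-N\varepsilon / C}\right),
\]

which is strictly increasing in \(N\) yet never exceeds the cap \(C\). If \(C\) is set below the disutility of fifty years of torture, no number of specks, not even \(3\uparrow\uparrow\uparrow 3\), outweighs the torture. The pro-torture conclusion needs the aggregate to grow without bound as \(N \to \infty\) at fixed \(\varepsilon\), which is exactly the divergence condition flagged above.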
The point of “torture vs specks” is whether enough tiny disutilities can add up to something bigger than a single huge disutility. To argue that specks may on average have positive utility kinda misses the point, because the point we’re debating isn’t the value of a dust speck, or a sneeze, or a stubbed toe, or an itchy butt, or whatever—we’re just using dust speck as an example of the tiniest bit of disutility you can imagine, but which nonetheless we can agree is disutility.
If dust specks don’t suit you for this purpose, find another bit of tiny disutility, as tiny as you can make it.
(As a sidenote, the point is missed in the opposite direction by those who say “well, say there’s a one billionth chance of a dust speck causing a fatal accident; you would then be killing untold numbers of people if you inflicted 3^^^^3 specks.” These people don’t add up tiny disutilities, they add up tiny probabilities. They make the right decision in rejecting the specks, but it’s not the actual point of the question.)
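For what it’s worth, the sidenote’s own arithmetic still lands on rejecting the specks; with an illustrative per-speck accident probability of \(p = 10^{-9}\) and \(N\) the number of specks inflicted:

\[
\mathbb{E}[\text{deaths}] = N \cdot p,
\]

which for \(N\) anywhere near \(3\uparrow\uparrow\uparrow 3\) is still an unimaginably large expected death toll. But the quantity being accumulated there is a tiny probability of a large harm, not a tiny harm, which is the distinction the comment is drawing.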
I reject unproven assumptions as torture justifications.
Well, I can reject your unproven assumptions as justifications for inflicting disutility on 3^^^3 people, the same way that I suppose spammers can excuse billions of spam messages by saying to themselves “it just takes a second to delete it, so it doesn’t hurt anyone much”, while not considering that, multiplied, this means they’ve wasted billions of seconds of people’s lives...
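The spam analogy is quantitative, and the arithmetic behind “billions of seconds” is simple (one second per message, as assumed above):

\[
10^{9}\ \text{messages} \times 1\ \text{s} = 10^{9}\ \text{s} \approx 31.7\ \text{years of aggregate human time}.
\]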
I think the purpose of this article is to point to some intuitive failures of a simple linear utility function. In other words, probably everyone who reads it agrees with you. The real challenge is in creating a utility function that wouldn’t output the wrong answer on corner cases like this.
No. No, that is not the purpose of the article.
Sorry, I’ve read that and still don’t know what it is that I’ve got wrong. Does this article not indicate a problem with simple linear utility functions, or is that not its purpose?
Eliezer disagrees. His point of view is:
While some people tried to appeal to non-linear aggregation, you would have to appeal to a non-linear aggregation which was non-linear enough to reduce 3^^^3 to a small constant.
whereas I and many others appeal to zero-aggregation, which indeed reduces any finite number (and hence the limit when this aggregation is taken to infinity) to zero.
The distinction is not that of rationality vs irrationality (e.g. scope insensitivity), but of the problem setup.
If you can explain zero aggregation in more detail, or point me to a reference, that would be appreciated, since I haven’t seen any full discussion of it.
The wrong answer is the people who prefer the specks, because that’s the answer which, if a trillion people answered that way, would condemn whole universes to blindness (instead of a mere trillion beings to torture).
Adding multiple dust specks to the same people definitely removes the linear character of the dust speck harm—if you take the number of dust specks necessary to make someone blind and spread them out over a lot more people, you drastically reduce the total harm. So that is not an appropriate way of reformulating the question. You are correct that the specks are the “wrong answer” as far as the author is concerned.
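A worked example of that non-linearity, using a purely illustrative convex harm function \(h(k) = k^{2}\) (arbitrary units) for \(k\) specks in one eye:

\[
h(10^{6}) = 10^{12} \qquad \text{versus} \qquad 10^{6} \cdot h(1) = 10^{6},
\]

so concentrating a blindness-level dose on one person comes out a million times worse than giving the same number of specks to a million people, one each. Re-aggregating the specks onto the same eyes therefore changes the question rather than restating it.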
Did the people choosing “specks” ask whether the persons in question might already have suffered other dust specks (or sneezes, hiccups, stubbed toes, etc.) immediately beforehand, from other agents deciding as they did, when they chose “specks”?
Most people didn’t, I suppose—they were asked:
Would you prefer that one person be horribly tortured for fifty years without hope or rest, or that 3^^^3 people get dust specks in their eyes?
Which isn’t the same as asking what people would do if they were given the power to choose one or the other. And even if people were asked the latter, it is plausible that they would not assume the existence of a trillion other agents making the same decision over the same set of people. That’s a rather non-obvious addition to a thought experiment which is already foreign to everyday experience.
In any case it’s just not the point of the thought experiment. Take the least convenient possible world: do you still choose torture if you know for sure there are no other agents choosing as you are over the same set of people?
do you still choose torture if you know for sure there are no other agents choosing as you are over the same set of people?
Yes. The consideration of how the world would look if everyone chose the same as me is a useful intuition pump, but it just illustrates the ethics of the situation; it doesn’t truly modify them.
Any choice isn’t really just about that particular choice; it’s about the mechanism you use to arrive at that choice. If people believe that it doesn’t matter how many people they each inflict tiny disutilities on, the world ends up worse off.
The point of the article is to illustrate scope insensitivity in the human utility function. Turning the problem into a collective action problem or an acausal decision theory problem by adding additional details to the hypothetical is not a useful intuition pump since it changes the entire character of the question.
For example, consider the following choice: You can give a gram of chocolate to 3^^^3 children who have never had chocolate before. Or you can torture someone for 50 years.
Easy. Everyone should have the same answer.
But wait! You forgot to consider that trillions of other people were being given the same choice! Now 3^^^3 children have diabetes.
This is exactly what you’re doing with your intuition pump, except that the value of eating additional chocolate inverts at a certain point, whereas dust specks in your eye get exponentially worse at a certain point. In both cases the utility function is not linear and thus distorts the problem.
Only if you assume that the dust speck decisions must be made in utter ignorance of the (trillion-1) other decisions. If the ignorance is less than utter, a nonlinear utility function that accepts the one dust speck will stop making the decision in favor of dust specks before universes go blind.
For example, since I know how Texas will vote for President next year (it will give its Electoral College votes to the Republican), I can instead use my vote to signal which minor-party candidate strikes me as the most attractive, thus promoting his party relative to the others, without having to worry whether my vote will elect him or cost my preferred candidate the election. Obviously, if everyone else in Texas did the same, some minor party candidate would win, but that doesn’t matter, because it isn’t going to happen.
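A minimal sketch of the decision rule described in the comment above (the one that accepts the single dust speck but stops choosing specks before universes go blind), in Python. Every name, threshold, and number here is an assumption introduced for illustration; nothing is taken from the original problem.

```python
# Illustrative model only: per-person harm is treated as zero while specks stay
# "lost in the noise", and grows steeply once the cumulative count becomes
# noticeable. An agent who can estimate how many specks earlier agents have
# already inflicted will then switch away from "specks" well before blindness.

TORTURE_HARM = 1.0e12     # assumed disutility of 50 years of torture (arbitrary units)
NOISE_LEVEL = 1_000       # assumed count below which specks vanish into everyday noise
BLINDNESS = 1_000_000     # assumed count at which eye damage becomes severe

def per_person_harm(specks: int) -> float:
    """Zero below the noise level, convex above it (illustrative model only)."""
    if specks <= NOISE_LEVEL:
        return 0.0
    return ((specks - NOISE_LEVEL) / (BLINDNESS - NOISE_LEVEL)) ** 2 * TORTURE_HARM

def choose(prior_specks: int, n_people: float) -> str:
    """Compare the added harm of one more speck per person against one torture."""
    marginal = per_person_harm(prior_specks + 1) - per_person_harm(prior_specks)
    return "torture" if marginal * n_people > TORTURE_HARM else "specks"

if __name__ == "__main__":
    N = 1.0e30  # stand-in for 3^^^3, which no machine number can actually represent
    print(choose(0, N))        # -> "specks": one speck stays below the noise level
    print(choose(500_000, N))  # -> "torture": the pile-up is no longer negligible
```

The design choice doing the work is the flat region below the noise level: it is what lets a single speck per person be genuinely negligible while still letting knowledge of a large prior pile-up flip the decision.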