Well, it looks like we found the root of our disagreement: I take the original problem literally, one blink and THAT’S IT, while you say “you don’t have the guarantee that the decisions of a trillion different agents won’t pile up”.
My version has an obvious solution (no torture), while yours has to be analyzed in detail for every possible potential pile up, and the impact has to be carefully calculated based on its probability, the number of people involved, and any other conceivable and inconceivable (i.e. at the probability level of 1/3^^^3) factors.
Until and unless there is compelling evidence of an inevitable pile-up, I pick the no-torture solution. Feel free to prove that the pile-up happens in a large chunk (>50%?) of all the impossible possible worlds, and I will be happy to reevaluate my answer.
take the original problem literally, one blink and THAT’S IT
Every election is stolen one vote at a time.
My version has an obvious solution (no torture),
My version also has an obvious solution: choosing not to inflict disutility on 3^^^3 people.
and the impact has to be carefully calculated based on its probability,
That’s the useful thing about having such an absurdly large number as 3^^^3. We don’t really need to calculate it, “3^^^3” just wins. And if you feel it doesn’t win, then 3^^^^3 would win. Or 3^^^^^3. Add as many carets as you feel are necessary.
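For scale, since the whole argument leans on the size of that number, here is the standard unpacking of the up-arrow notation it abbreviates:

```latex
% "^" repeated is Knuth up-arrow notation; each extra arrow iterates the previous operation.
\[
3\uparrow 3 = 3^3 = 27, \qquad
3\uparrow\uparrow 3 = 3^{3^3} = 3^{27} = 7{,}625{,}597{,}484{,}987,
\]
\[
3\uparrow\uparrow\uparrow 3 = 3\uparrow\uparrow(3\uparrow\uparrow 3)
  = \underbrace{3^{3^{\cdot^{\cdot^{\cdot^{3}}}}}}_{7{,}625{,}597{,}484{,}987\ \text{threes}}
\]
```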
while yours has to be analyzed in detail for every possible potential pile up,
Thinking about whether the world would be better or worse if everyone decided as you did is really one of the fundamental methods of ethics, not some random bizarre scenario I just concocted for this experiment.
The point is: if everyone decided as you would, the specks would pile up, and universes would be doomed to blindness. If everyone decided as I would, they would not pile up.
Prove it.
That’s the useful thing about having such an absurdly large number as 3^^^3. We don’t really need to calculate it, “3^^^3” just wins.
At this level, so many different low-probability factors come into play (e.g. blinking could be good for you because it reduces the incidence of eye problems in some cases) that “choosing not to inflict disutility” relies on an unproven assumption: that the utility of blinking is always negative, no exceptions.
I reject unproven assumptions as torture justifications.
If the dust speck has a slight tendency to be bad, 3^^^3 wins.
If it does not have a slight tendency to be bad, it is not “the least bad bad thing that can happen to someone”—pick something worse for the thought experiment.
If the dust speck has a slight tendency to be bad, 3^^^3 wins.
Only if you agree to follow EY in consolidating many different utilities in every possible case into one all-encompassing number, something I am yet to be convinced of, but that is beside the point, I suppose.
If it does not have a slight tendency to be bad, it is not “the least bad bad thing that can happen to someone”—pick something worse for the thought experiment.
Sure, if you pick something with a guaranteed negative utility and you think that there should be one number to bind them all, I grant your point.
However, this is not how the problem appears to me. A single speck in the eye has such an insignificant utility that there is no way to estimate its effects without knowing a lot more about the problem.
Basically, I am uncomfortable with the following somewhat implicit assumptions, all of which are required to pick torture over nuisance:
a tiny utility can be reasonably well estimated, even up to a sign
zillions of those utilities can be combined into one single number using a monotonic function
these utilities do not interact in any way that would make their combination change sign
the resulting number is invariably useful for decision making
A breakdown in any of these assumptions would mean needless torture of a human being, and I do not have enough confidence in EY’s theoretical work to stake my decision on it.
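For concreteness, here is a toy sketch (mine, not from the original exchange) of the reasoning those assumptions license if you grant all four; every magnitude in it is invented, since the original problem fixes none of them:

```python
# Toy model of the additive-utilitarian reasoning under dispute.
# Every number here is invented for illustration; the original problem fixes none of them.

SPECK_UTILITY = -1e-9      # assumption 1: the tiny utility is estimable, and comes out negative
TORTURE_UTILITY = -1e12    # a huge but finite disutility standing in for 50 years of torture
N_PEOPLE = 10 ** 30        # stand-in for 3^^^3, which no computer can represent directly

def aggregate(per_person_utility: float, n_people: float) -> float:
    """Assumptions 2-4: per-person utilities combine by plain summation into one
    decision-ready number, with no interactions that could flip the sign."""
    return per_person_utility * n_people

specks_total = aggregate(SPECK_UTILITY, N_PEOPLE)
# Under these assumptions the specks are the worse outcome, so "choose torture" follows;
# knock out any one assumption and this comparison stops being decision-relevant.
print(specks_total < TORTURE_UTILITY)  # prints True
```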
Only if you agree to follow EY in consolidating many different utilities in every possible case into one all-encompassing number, something I am yet to be convinced of, but that is beside the point, I suppose.
If you have a preference for some outcomes versus other outcomes, you are effectively assigning a single number to those outcomes. The method of combining these is certainly a viable topic for dispute—I raised that point myself quite recently.
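In its standard form: for a finite or countable set of outcomes ordered by a complete, transitive preference, there is some real-valued function u such that

```latex
\[
a \succeq b \iff u(a) \ge u(b).
\]
```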
Sure, if you pick something with a guaranteed negative utility and you think that there should be one number to bind them all, I grant your point.
However, this is not how the problem appears to me. A single speck in the eye has such an insignificant utility that there is no way to estimate its effects without knowing a lot more about the problem.
It was quite explicitly made a part of the original formulation of the problem.
Considering the assumptions you are unwilling to make:
tiny utility can be reasonably well estimated, even up to a sign
As I’ve been saying, there quite clearly seem to be things that fall into the realm of both “I am confident this is typically a bad thing” and “it runs counter to my intuition that I would prefer torture to this, regardless of how many people it applied to”.
the resulting number is invariably useful for decision making
I addressed this at the top of this post.
zillions of those utilities can be combined into one single number using a monotonic function
these utilities do not interact in any way that would make their combination change sign
I think it’s clear that there must be some means of combining individual preferences into moral judgments, if there is a morality at all. I am not certain that it can be done with the utility numbers alone. I am reasonably certain that the combination is monotonic: I cannot conceive of a situation where we would prefer some people to be less happy just for the sake of their being less happy. What is needed here is more than monotonicity, however: the combined disutility must diverge as the number of people grows, with the per-person utility held fixed. I raise this point here, and at present think it is the closest thing to a reasonable attack on Eliezer’s argument.
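A toy contrast of my own to spell out that divergence requirement, not anything from the original post: fix a per-person disutility ε > 0 and a torture disutility T ≥ ε. Plain summation eventually beats T, while a bounded aggregator, for example one with geometric weights, never does:

```latex
\[
U_{\text{sum}}(N) = -N\varepsilon \longrightarrow -\infty
  \quad\Longrightarrow\quad \exists\, N :\ U_{\text{sum}}(N) < -T,
\]
\[
U_{\text{geo}}(N) = -\varepsilon \sum_{i=1}^{N} 2^{-i} > -\varepsilon \ge -T
  \quad\text{for every } N.
\]
```

Only aggregators of the first kind let 3^^^3 specks outweigh the torture; monotonicity alone is compatible with either.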
On balance, I think Eliezer is likely to be correct; my worry is not great enough that I would stake some fraction of 3^^^3 utilons on the contrary, and I would presently pick torture if I were truly confronted with this situation and didn’t have more time to discuss, debate, and analyze. Given that there is insufficient stuff in the universe to make 3^^^3 dust specks, much less the eyes for them to fly into, I am supremely confident that I won’t be confronted with this choice any time soon.
The point of “torture vs specks” is whether enough tiny disutilities can add up to something bigger than a single huge disutility. To argue that specks may on average have positive utility kinda misses the point, because what we’re debating isn’t the value of a dust speck, or a sneeze, or a stubbed toe, or an itchy butt, or whatever; we’re just using the dust speck as an example of the tiniest bit of disutility you can imagine, but one which we can nonetheless agree is a disutility.
If dust specks don’t suit you for this purpose, find another bit of tiny disutility, as tiny as you can make it.
(As a side note, the point is missed in the opposite direction by those who say “well, say there’s a one-in-a-billion chance of a dust speck causing a fatal accident; you would then be killing untold numbers of people if you inflicted 3^^^^3 specks.” These people don’t add up tiny disutilities, they add up tiny probabilities. They make the right decision in rejecting the specks, but it’s not the actual point of the question.)
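For completeness, the arithmetic those people are doing: with a per-speck accident probability p of around one in a billion and N = 3^^^^3 specks inflicted, the expected death count is

```latex
\[
\mathbb{E}[\text{deaths}] = p \cdot N \approx 10^{-9} \cdot 3\uparrow\uparrow\uparrow\uparrow 3,
\]
```

still an unimaginably large number, but what gets summed there is probability mass rather than the speck’s own disutility, which is why it answers a different question.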
I reject unproven assumptions as torture justifications.
Well, I can reject your unproven assumptions as justifications for inflicting disutility on 3^^^3 people, the same way that I suppose spammers can excuse billions of spam emails by telling themselves “it just takes a second to delete it, so it doesn’t hurt anyone much”, while not considering that, multiplied out, those seconds add up to billions of seconds taken from people’s lives...