Eliezer: after wrestling with this for a while, I think I’ve identified at least one of the reasons for all the fighting. First of all, I agree with you that the people who say, “3^^^3 isn’t large enough” are off-base. If there’s some N that justifies the tradeoff, 3^^^3 is almost certainly big enough; and even if it isn’t, we can change the number to 4^^^4, or 3^^^^3, or Busy Beaver (Busy Beaver (3^^^3)), or something, and we’re back to the original problem.
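(For scale, in case the notation is unfamiliar, here is the standard Knuth up-arrow ladder; nothing here is specific to the dust-speck argument, it’s just the usual definitions:
3^3 = 27
3^^3 = 3^(3^3) = 3^27 = 7,625,597,484,987
3^^^3 = 3^^(3^^3), i.e. a power tower of 3s roughly 7.6 trillion levels tall
and the Busy Beaver function grows faster than any computable function, so Busy Beaver (Busy Beaver (3^^^3)) dwarfs even that.)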
For me, at least, the problem comes down to what ‘preference’ means. I don’t think I have any coherent preferences over the idea of 3^^^3 dust specks. Note that I don’t mean my preferences are inconsistent, or poorly formed, or that my intuition is bad; I mean that talking about my preferences on that issue doesn’t have any meaning at all.
Basically, I don’t believe there’s any objective standard of value. Even preferences like “I think as many people should die as painfully as possible” aren’t wrong, per se; they just put you beyond the bounds of civilized society and make me have no desire to interact with you any more. So asking which of two circumstances is ‘really better’ doesn’t have any meaning; ‘better’ only makes sense when you ask ‘better to whom.’ Which leads to two problems.
The first is that the question tends to slip over to “which choice would you make?” But once I start phrasing it in terms of me making a choice, all my procedural safeguards kick in. If you’re a true deontologist, your mental side constraints assert themselves; and even if you’re a sort-of utilitarian, like I am, rules like “we can’t be sure that 3^^^3 people are actually going to suffer” and “helping to forge a society that considers torture acceptable leads to horrifying long-term consequences” start firing. I agree those are outside the parameters of the original question; but the original question was ill-posed, and this is one of the places it slips to in translation.
But even if you avoid that, you still come to the question of what it means to prefer A over B, when you have no meaningful choice in the matter. I can’t imagine a situation in which I could cause 3^^^3 people any coherent result. I’m not sure I believe there are or ever will be 3^^^3 moral agents. And do I have a coherent preference over circumstances that I will never know have occurred? Even if 3^^^3 people suffer, I’m not going to know that they do. It won’t affect me, and I won’t know that it affected anyone else, either.
Basically, moral questions that involve wildly unlikely or outright impossible scenarios don’t tend to be terribly enlightening. If we lived in a world where we could reliably benefit unimaginably large numbers of people by causing vast pain to a few, maybe that would be okay. But since we don’t, I think hypotheticals like this are more likely to short-circuit on the bounds of our extremely useful assumptions about the nature of the world than they are to tell us anything interesting.