This was confronted in the Escalation Argument. Would you prefer 1000 people being tortured for 49 years to 1 person being tortured for 50 years? (If you would, take 1000 to 1000000 and 49 to 49.99, etc.) Is there any step of the argument where your projected utility function isn’t additive enough to prefer that a much smaller number of people suffer a little bit more?
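A minimal numeric sketch of that chain, assuming an additive utility in which per-person harm scales roughly with duration (the multipliers 1000 and 0.999 are illustrative, not from the original argument): every step trades a tiny reduction in duration for a huge increase in victims, so the aggregate harm strictly grows.

```python
# Illustrative sketch only: additive utility with total harm ~ people * years.
people, years = 1, 50.0
for step in range(10):
    next_people = people * 1000   # vastly more victims...
    next_years = years * 0.999    # ...each tortured very slightly less
    # Under the additive assumption, the "milder" scenario is worse in aggregate:
    assert next_people * next_years > people * years
    people, years = next_people, next_years

print(people, years)  # 10^30 people, each still tortured for ~49.5 years
```

If you accept each pairwise comparison, transitivity walks you all the way down the scale.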
Actually, I think you’re right. The escalation argument has caught me in a contradiction. I wonder why I didn’t see it last time around.
I still prefer the specks though. My prior in favor of the specks is strong enough that I have to conclude that there’s something wrong with the escalation argument that I’m not presently clever enough to find. It’s a bit like reading a proof that 2+2 = 5. You know you’ve just read a proof, and you checked each step, but you still, justifiably, don’t believe it. It’s far more likely that the proof fooled you in some subtle way than it is that arithmetic is actually inconsistent.
Well, we have better reasons to believe that arithmetic is consistent than we have to believe that human beings’ strong moral impulses are coherent in cases outside of everyday experience. I think much of the point of the SPECKS vs. TORTURE debate was to emphasize that our moral intuitions aren’t perceptions of a consistent world of values, but instead a thousand shards of moral desire which originated in a thousand different aspects of primate social life.
For one thing, our moral intuitions don’t shut up and multiply. When we start making decisions that affect large numbers of people (3^^^3 isn’t necessary; a million is enough to take us far outside of our usual domain), it’s important to be aware that the actual best action might sometimes trigger a wave of moral disgust, if the harm to a few seems more salient than the benefit to the many, etc.
Keep in mind that this isn’t arguing for implementing Utilitarianism of the “kill a healthy traveler and harvest his organs to save 10 other people” variety; among its faults, that kind of Utilitarianism fails to consider its probable consequences on human behavior if people know it’s being implemented. The circularity of “SPECKS” just serves to point out one more domain in which Eliezer’s Maxim applies:
This came to mind: what you intuitively believe about a statement may as well be described as an “emotion” of “truthiness”, triggered when the focus of attention holds the model, just like any other emotion that evaluates situations. Emotion isn’t always right, and an estimate of plausibility isn’t always right, but they are basically the same thing. I used to separate them along the lines of the probability-utility distinction, but that distinction is probably more confusing than helpful, with truthiness set apart on its own and the concept of emotion covering everything but it.
Yup. I get all that. I still want to go for the specks.
Perhaps it has to do with the fact that 3^^^3 is way more people than could possibly exist. Perhaps the specks vs. torture hypothetical doesn’t actually matter. I don’t know. But I’m just not convinced.
Just give up already! Intuition isn’t always right!
Hello. I think the Escalation Argument can sometimes be found on the wrong side of Zeno’s Paradox. Say there is negative utility to both dust specks and torture, where dust specks have finite negative utility. Both dust specks and torture can be assigned to an ‘infliction of discomfort’ scale that corresponds to a segment of the real number line. At minimal torture, there is a singularity in the utility function—it goes to negative infinity.
At any point on the number line corresponding to an infliction of discomfort between dust specks and minimal torture, the utility is negative but finite. The Escalation Argument begins in the torture zone, and slowly diminishes the duration of the torture. I believe the argument breaks down when the infliction of discomfort is no longer torture. At that point, non-torture has higher utility than all preceding torture scenarios. If it’s always torture, then you never get to dust specks.
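One way to write down the utility function being described, where the functional form is my own illustration and $T$ is the discomfort level of minimal torture:

$$
U(x) \;=\;
\begin{cases}
-\dfrac{x}{T-x}, & 0 \le x < T \quad \text{(anything short of torture: finite)}\\[1ex]
-\infty, & x \ge T \quad \text{(any torture at all)}
\end{cases}
$$

Any finite number of sub-torture harms then sums to a finite negative utility, while every torture scenario is infinitely bad, so the escalation chain cannot cross the torture/non-torture boundary.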
Then your utility function can no longer say that 25 years of torture is preferable to 50 years, since both sit at negative infinity. This difficulty is surmountable—I believe the original post had some discussion on hyperreal utilities and the like—but the scheme looks a little contrived to me.
To me, a utility function is a contrivance. So it’s OK if it’s contrived. It’s a map, not the territory, as illustrated above.
I take someone’s answer to this question at their word. When they say that no number of dust specks equals torture, I accept that as a datum for their utility function. The task is then to contrive a function which is consistent with that.
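As a toy example of such a contrivance (constants and functional form are arbitrary choices of mine, not anything from the thread): make the aggregate disutility of dust specks bounded, so it asymptotes below the disutility of torture no matter how many specks there are.

```python
import math

# Toy "contrived" utility: aggregate speck disutility is bounded above -1,
# while torture is fixed at -1000, so no number of specks ever outweighs it.
def specks_utility(n: int) -> float:
    """Total disutility of n dust specks; tends to -1 as n grows."""
    return -(1.0 - math.exp(-1e-6 * n))

TORTURE_UTILITY = -1000.0

for n in (1, 10**6, 10**12, 10**100):
    assert specks_utility(n) > TORTURE_UTILITY  # specks never reach torture
```

The point is only that a function consistent with the stated preference exists, not that it is the right one.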
Orthonormal, you’re rehashing things I’ve covered in the post. Yes, many reasonable discounting methods (like exponential discounting in the “proximity argument”) do have a specific step where the derivative becomes negative.
What’s more, that fact doesn’t look especially unintuitive once you zoom in on it; do the math and see. For example, in the proximity argument the step involves the additional people suffering so far away from you that even an infinity of them sums up to less than e.g. one close relative of yours. Not so unrealistic for everyday humans, is it?
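To make “do the math” concrete, here is a toy version of that proximity calculation under my own assumptions: a discount factor $\gamma \in (0,1)$ per unit of distance and at most $k$ sufferers at each distance $d$, each suffering harm $u$. The discounted total over infinitely many distant people is a convergent geometric series,

$$
\sum_{d=1}^{\infty} k\,\gamma^{d}\,u \;=\; \frac{k\gamma}{1-\gamma}\,u ,
$$

which is finite, and for $\gamma$ small enough (below $1/(k+1)$) it falls short of the undiscounted weight $u$ you place on one close relative.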
It’s intuitive to me that everyday humans would do this, but not that it would be right.