The right answer is |U(3^^^3 + 1 dust specks) - U(3^^^3 dust specks)| < |U(1 dust speck) - U(0 dust specks)|, and U(any number of dust specks) < U(torture).
There is no additivity axiom for utility.
This is called the “proximity argument” in the post.
I’ve no idea how we’re managing to have this discussion under a deleted submission. It shouldn’t have even been posted to LW! It was live for about 30 seconds until I realized I clicked the wrong button.
It’s in the feed now, and everyone subscribed will see it. You cannot unpublish on the Internet! Can you somehow “undelete” it? I think it’s a fine enough post.
Nope, I just tried pushing some buttons (edit, save, submit etc.) and it didn’t work. Oh, boy. I created a secret area on LW!
Hmm. That should probably be posted to Known Issues...
What smoofra said (although I would reverse the signs and assign torture and dust specks negative utility). Say there is a singularity in the utility function for torture (goes to negative infinity). The utility of many dust specks (finite negative) cannot add up to the utility for torture.
If the utility function for torture were negative infinity:
- any choice with a nonzero probability of leading to torture gains infinite disutility,
- any torture of any duration has the same disutility—infinite,
- the criteria for torture vs. non-torture become rigid—something which is almost torture is literally infinitely better than something which is barely torture,
- et cetera.
In other words, I don’t think this is a rational moral stance.
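To make the first two points concrete, here is a minimal sketch (my own, purely illustrative) of an expected-utility calculation in which torture is assigned negative infinity:

```python
# Sketch: why a negative-infinity utility for torture collapses expected-utility comparisons.
# Assumes every choice carries some nonzero probability of leading to torture.

U_TORTURE = float("-inf")

def expected_utility(p_torture: float, u_otherwise: float) -> float:
    """Expected utility when a choice leads to torture with probability p_torture."""
    return p_torture * U_TORTURE + (1.0 - p_torture) * u_otherwise

print(expected_utility(1e-30, 0.0))    # -inf
print(expected_utility(0.5, -100.0))   # -inf
# Both options evaluate to -inf, so the function cannot rank them:
print(expected_utility(1e-30, 0.0) < expected_utility(0.5, -100.0))  # False
```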
RobinZ, perhaps my understanding of the term utility differs from yours. In finance & economics, utility is a scalar (i.e., a real number) function u of wealth w, subject to:
u(w) is non-decreasing; u(w) is concave downward.
(Negative) singularities to the left are admissible.
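For a concrete illustration (my example, not part of the claim above): logarithmic utility satisfies both conditions and has exactly such a negative singularity at w = 0.

```python
import math

def u(w: float) -> float:
    """Log utility of wealth: non-decreasing, concave downward, u(w) -> -inf as w -> 0+."""
    return math.log(w)

print(u(2.0), u(1.0), u(1e-12))  # ~0.69, 0.0, ~-27.6: plunging toward -inf near zero
```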
I confess I don’t know about the history of how the utility concept has been generalized to encompass pain and pleasure. It seems a multi-valued utility function might work better than a scalar function.
The criteria you mention don’t exclude a negative singularity to the left, but when you attempt to optimize for maximum utility, the singularity causes problems. I was describing a few.
Edit: I mean to say: in the utilitarian utility function, which has multiple inputs.
I can envision a vector utility function u(x) = (a, b), where the ordering is on the first term a, unless there is a tie at negative infinity; in that case the ordering is on the second term b. b is −1 for one person-hour of minimal torture, and it’s multiplicative in persons, duration and severity >= 1. (Pain infliction of less than 1 times minimal torture severity is not considered torture.) This solves your second objection, and the other two are features of this ‘Just say no to torture’ utility function.
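A rough sketch of that lexicographic ordering (the function names and scale are my own, purely to illustrate the idea):

```python
import math

def torture_term(persons: float, hours: float, severity: float) -> float:
    """Second component b: -1 per person-hour of minimal torture, scaled by severity >= 1."""
    if severity < 1.0:
        raise ValueError("below minimal-torture severity, it does not count as torture")
    return -1.0 * persons * hours * severity

def prefer(u1: tuple, u2: tuple) -> bool:
    """True if the outcome with utility u1 = (a1, b1) is preferred to u2 = (a2, b2)."""
    a1, b1 = u1
    a2, b2 = u2
    if a1 == a2 == -math.inf:   # tie at negative infinity: fall back to the torture term
        return b1 > b2
    return a1 > a2              # otherwise order on the first component alone

# One person tortured for an hour vs. a thousand people tortured for an hour:
u_one  = (-math.inf, torture_term(1, 1, 1))
u_many = (-math.inf, torture_term(1000, 1, 1))
print(prefer(u_one, u_many))  # True: headcount and duration still matter among torture outcomes
```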
Quote:

- any choice with a nonzero probability of leading to torture gains infinite disutility,
- any torture of any duration has the same disutility—infinite,
- the criteria for torture vs. non-torture become rigid—something which is almost torture is literally infinitely better than something which is barely torture,
But every choice has a nonzero probability of leading to torture. Your proposed moral stance amounts to “minimize the probability-times-intensity of torture”, to which a reasonable answer might be, “set off a nuclear holocaust annihilating all life on the planet”.
(And the distinction between torture and non-torture is—at least in the abstract—fuzzy. How much pain does it have to be to be torture?)
In real life or in this example? I don’t believe this is true in real life.
There is nothing you can do that makes it impossible that there will be torture. Therefore, every choice has a nonzero probability of being followed by torture. I’m not sure whether “leading to torture” is the best way to phrase this, though.
What he said. Also, if you are evaluating the rectitude of each possible choice by its consequences (i.e. using your utility function), it doesn’t matter if you actually (might) cause the torture or if it just (possibly) occurs within your light cone—you have to count it.
Are you referring to me? I’m a she.
headdesk
What Alicorn said, yes. Dammit, I thought I was doing pretty well at avoiding the pronoun problems...
Don’t worry about it. It was a safe bet, if you don’t know who I am and this is the context you have to work with ;)
Hey, don’t tell me what I’m not allowed to worry about! :P
(...geez, I feel like I’m about to be deleted as natter...)
I believe you should count choices that can measurably change the probability of torture. If you can’t measure a change in the probability of torture, you should count that as no change. I believe this view more closely corresponds to current physical models than the infinite butterflies concept.
But if torture has infinite weight, even any change—even one too small to measure—has either infinite utility or infinite disutility. Which makes the situation even worse.
Anyway, I’m not arguing that you should measure it this way, I’m arguing that you don’t. Mathematically, the implications of your proposal do not correspond to the value judgements you endorse, and therefore the proposal doesn’t correspond to your actual algorithm, and should be abandoned.
Changes that are small enough to be beyond Heisenberg’s epistemological barrier cannot in principle be shown to exist. So, they acquire Easter Bunny-like status.
Changes that are within this barrier but beyond my measurement capabilities aren’t known to me; and, utility is an epistemological function. I can’t measure it, so I can’t know about it, so it doesn’t enter into my utility.
I think a bigger problem is the question of enduring a split second of torture in exchange for a huge social good. This sort of thing is ruled out by that utility function.
But that’s ridiculous. I would gladly exchange being tortured for a few seconds—say, waterboarding, like Christopher Hitchens suffered—for, say, an end to starvation worldwide!
More to the point, deleting infinities from your equations works sometimes—I’ve heard of it being done in quantum mechanics—but doing so with the noisy filter of your personal ignorance, or even the less-noisy filter of theoretical detectability, leaves wide open the possibility of inconsistencies in your system. It’s just not what a consistent moral framework looks like.
I agree about the torture for a few seconds.
A utility function is just a way of describing the ranking of desirability of scenarios. I’m not convinced that singularities on the left can’t be a part of that description.
Singularities on the left I can’t rule out universally, but setting the utility of torture to negative infinity … well, I’ve told you my reasons for objecting. If you want me to spend more time elaborating, let me know; for my own part, I’m done.
There is no “Heisenberg’s epistemological barrier”. The utility function is defined over everything that could possibly be, whether or not you know specific possibilities to be real. You are supposed to average over the set of possibilities that you can’t distinguish because of limited knowledge.
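A small sketch of what that averaging looks like (the credences and utilities below are made-up placeholders):

```python
# Average utility over possibilities you cannot distinguish, weighted by subjective credence.
# All numbers here are hypothetical placeholders.
possibilities = [
    ("no torture results", 0.999999, -1.0),
    ("torture results",    0.000001, -1.0e9),
]

expected = sum(credence * utility for _, credence, utility in possibilities)
print(expected)  # ~ -1001.0: the unmeasurably small possibility still moves the average
```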
The equation involving Planck’s constant in the following link is not in dispute, and that equation does constitute an epistemological barrier:
http://en.wikipedia.org/wiki/Uncertainty_principle
Everyone has their own utility function (whether they’re honest about it or not), I suppose. Personally, I would never try to place myself in the shoes of Laplace’s Demon. They’re probably those felt pointy jester shoes with the bells on the end.
See Absolute certainty.
Proof left to the reader?
If I am to choose between getting a glass of water or a cup of coffee, I am quite confident that neither choice will lead to torture. You certainly cannot prove that either choice will lead to torture. Absolute certainty has nothing to do with it, in my opinion.
You either have absolute certainty in the statement that neither choice will lead to torture, or you allow some probability of it being incorrect.
This was confronted in the Escalation Argument. Would you prefer 1000 people being tortured for 49 years to 1 person being tortured for 50 years? (If you would, take 1000 to 1000000 and 49 to 49.99, etc.) Is there any step of the argument where your projected utility function isn’t additive enough to prefer that a much smaller number of people suffer a little bit more?
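For concreteness, here is a sketch of the escalation chain under a simple additive “person-years of torture” accounting (the multipliers are illustrative, not from the original argument):

```python
# Each step multiplies the number of victims while slightly shortening each victim's torture.
people, years = 1, 50.0
for step in range(5):
    next_people, next_years = people * 1000, years * 0.98
    print(f"step {step}: {people * years:.1f} person-years  ->  {next_people * next_years:.1f} person-years")
    people, years = next_people, next_years
# Under additive accounting each step is a huge net loss, which is what the argument
# exploits: enough such steps connect 50 years of torture to dust specks.
```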
Actually, I think you’re right. The escalation argument has caught me in a contradiction. I wonder why I didn’t see it last time around.
I still prefer the specks, though. My prior in favor of the specks is strong enough that I have to conclude that there’s something wrong with the escalation argument that I’m not presently clever enough to find. It’s a bit like reading a proof that 2 + 2 = 5. You know you’ve just read a proof, and you checked each step, but you still, justifiably, don’t believe it. It’s far more likely that the proof fooled you in some subtle way than it is that arithmetic is actually inconsistent.
Well, we have better reasons to believe that arithmetic is consistent than we have to believe that human beings’ strong moral impulses are coherent in cases outside of everyday experience. I think much of the point of the SPECKS vs. TORTURE debate was to emphasize that our moral intuitions aren’t perceptions of a consistent world of values, but instead a thousand shards of moral desire which originated in a thousand different aspects of primate social life.
For one thing, our moral intuitions don’t shut up and multiply. When we start making decisions that affect large numbers of people (3^^^3 isn’t necessary; a million is enough to take us far outside of our usual domain), it’s important to be aware that the actual best action might sometimes trigger a wave of moral disgust, if the harm to a few seems more salient than the benefit to the many, etc.
Keep in mind that this isn’t arguing for implementing Utilitarianism of the “kill a healthy traveler and harvest his organs to save 10 other people” variety; among its faults, that kind of Utilitarianism fails to consider its probable consequences on human behavior if people know it’s being implemented. The circularity of “SPECKS” just serves to point out one more domain in which Eliezer’s Maxim applies:
This came to mind: what you intuitively believe about a certain statement may as well be described as an “emotion” of “truthiness”, triggered by the focus of attention holding the model, just like any other emotion that assigns value to situations. Emotion isn’t always right, and an estimate of plausibility isn’t always right, but these are basically the same thing. I somehow used to separate them, along the lines of the probability-utility distinction, but that is probably a more confusing than helpful distinction, with truthiness on its own and the concept of emotion containing everything but it.
Yup. I get all that. I still want to go for the specks.
Perhaps it has to do with the fact that 3^^^3 is way more people than could possibly exist. Perhaps the specks v. torture hypothetical doesn’t actually matter. I don’t know. But I’m just not convinced.
Just give up already! Intuition isn’t always right!
Hello. I think the Escalation Argument can sometimes be found on the wrong side of Zeno’s Paradox. Say there is negative utility to both dust specks and torture, where dust specks have finite negative utility. Both dust specks and torture can be assigned to an ‘infliction of discomfort’ scale that corresponds to a segment of the real number line. At minimal torture, there is a singularity in the utility function—it goes to negative infinity.
At any point on the number line corresponding to an infliction of discomfort between dust specks and minimal torture, the utility is negative but finite. The Escalation Argument begins in the torture zone, and slowly diminishes the duration of the torture. I believe the argument breaks down when the infliction of discomfort is no longer torture. At that point, non-torture has higher utility than all preceding torture scenarios. If it’s always torture, then you never get to dust specks.
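A sketch of the kind of utility function being described (the threshold and formula are my own placeholders, chosen only to match the shape: finite below minimal torture, negative infinity at and beyond it):

```python
import math

TORTURE_THRESHOLD = 1.0  # hypothetical point on the 'infliction of discomfort' scale

def utility(discomfort: float) -> float:
    """Negative but finite below the threshold; negative infinity at or beyond it."""
    if discomfort >= TORTURE_THRESHOLD:
        return -math.inf
    return -discomfort / (TORTURE_THRESHOLD - discomfort)  # worsens toward the threshold

print(utility(0.001))  # dust speck: tiny negative utility
print(utility(0.9))    # severe but non-torture discomfort: finite negative utility
print(utility(1.0))    # minimal torture: -inf, where the escalation argument crosses the line
```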
Then your utility function can no longer say 25 years of torture is preferable to 50 years. This difficulty is surmountable—I believe the original post had some discussion on hyperreal utilities and the like—but the scheme looks a little contrived to me.
To me, a utility function is a contrivance. So it’s OK if it’s contrived. It’s a map, not the territory, as illustrated above.
I take someone’s answer to this question at their word. When they say that no number of dust specks equals torture, I accept that as a datum for their utility function. The task is then to contrive a function which is consistent with that.
Orthonormal, you’re rehashing things I’ve covered in the post. Yes, many reasonable discounting methods (like exponential discounting in the “proximity argument”) do have a specific step where the derivative becomes negative.
What’s more, that fact doesn’t look especially unintuitive once you zoom in on it; do the math and see. For example, in the proximity argument the step involves the additional people suffering so far away from you that even an infinity of them sums up to less than e.g. one close relative of yours. Not so unrealistic for everyday humans, is it?
It’s intuitive to me that everyday humans would do this, but not that it would be right.