If you’re already taking “speck” to have that meaning, then your statement “Unless the pain is communicable (via hive mind or what have you), it will still be roundable to zero.” would no longer be true.
Granted. Let’s take an example of pain that would be decidedly not roundable to zero. Say, 3^^^3 paper cuts, with no further consequences. Still preferable to torture.
(What I’m about to say is, I think, the same as what Jiro has been saying, but I have the impression that you aren’t quite responding to what I think Jiro has been saying. So either you’re misunderstanding Jiro, in which case another version of the argument might help, or I’m misunderstanding Jiro, in which case I’d be interested in your response to my comments as well as his/hers :-).)
It seems to me pretty obvious that one can construct a scale that goes something like this:
a stubbed toe
a paper cut
a painfully grazed knee
...
a broken ankle
a broken leg
a multiply-fractured leg
...
an hour of expertly applied torture
80 minutes of expertly applied torture
...
a year of expertly applied torture
13 months of expertly applied torture
...
49 years of expertly applied torture
50 years of expertly applied torture
with, say, at most a million steps on the scale from the stubbed toe to 50 years’ torture, and with the property that any reasonable person would prefer N people suffering problem n+1 to (let’s say) (1000N)^2 people suffering problem n. So, e.g. if I have to choose between a million people getting 13 months’ torture and a million million million people getting 12 months’ torture, I pick the former.
(Why not just say “would prefer 1 person suffering problem n+1 to 1000000 people suffering problem n”? Because you might take the view that large aggregates of people matter sublinearly, so that 10^12 stubbed toes aren’t as much worse than 10^6 stubbed toes as 10^6 stubbed toes are than 1. The particular choice of scaling in the previous paragraph is rather arbitrary.)
If so, then we can construct a chain: 1 person getting 50 years’ torture is less bad than 10^6 people getting 49 years, which is less bad than 10^18 people getting 48 years, which is less bad than [… a million steps here …] which is less bad than [some gigantic number] getting stubbed toes. That final gigantic number is a lot less than 3^^^3; if you replace (1000N)^2 with some faster-growing function of N then it might get bigger, but in any case it’s finite.
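(To make the arithmetic concrete, here is a minimal sketch of that chain, taking the arbitrary (1000N)^2 rule and the at-most-a-million-steps figure above at face value; only the orders of magnitude matter.)

```python
from math import log10

# The chain above: at each step, N people suffering the worse item are traded for
# (1000 * N)^2 people suffering the slightly milder item, starting from N = 1.
# In log10 terms: e_{k+1} = 2 * (e_k + 3) with e_0 = 0, which solves to e_k = 6 * (2**k - 1).
def exponent(k):
    """log10 of the head-count after k steps down the scale."""
    return 6 * (2 ** k - 1)

print([exponent(k) for k in range(4)])  # [0, 6, 18, 42] -> 1, 10^6, 10^18, 10^42, as in the text

# After a million steps the head-count is 10**exponent(10**6). The exponent itself
# has about 10**6 * log10(2) ~ 301,030 digits, so the final number is a 1 followed
# by a number of zeros that itself has some 300,000 digits: gigantic, but still only
# a three-level power tower, whereas 3^^^3 is a tower of 3s about 7.6 trillion levels high.
print(round(exponent(10 ** 6).bit_length() * log10(2)))  # ~301,031
```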
If you want to maintain that TORTURE is worse than SPECKS in view of this sort of argument, I think you need to do one of the following:
Abandon transitivity. “Yes, there’s a chain of worseness just as you describe, but that doesn’t mean that the endpoints compare the way you say.” (In that case: Why do you find that credible?)
Abandon scaling entirely, even for small differences. “No, 80 minutes’ torture for a million people isn’t actually much worse than 80 minutes’ torture for one person.” (In that case: Doesn’t that mean that as long as one person is suffering something bad, you don’t care whether any other person suffers something less bad? Isn’t that crazy?)
Abandon continuity. “No, you can’t construct that scale of suffering you described. Any chain of sufferings that starts with a stubbed toe and ends with 50 years’ torture must have at least one point in the middle where it makes an abrupt jump such that no amount of the less severe suffering can outweigh a single instance of the more severe.” (In that case: Can you point to a place where such a jump happens?)
Abandon scaling entirely for large numbers. “Any given suffering is much worse when it happens to a million people than to one, but there’s some N beyond which it makes no difference at all how many people it happens to.” (In that case: Why? You might e.g. appeal to the idea that beyond a certain number, some of the people are necessarily exact duplicates of one another.)
Abandon logic altogether: “Bah, you and your complicated arguments! I don’t care, I just know that TORTURE is worse than SPECKS.” (In that case: well, OK.)
Something else I haven’t thought of. (In that case: what?)
Incidentally, for my part I am uncertain about TORTURE versus SPECKS on two grounds. (1) I do think it’s possible that for really gigantic numbers of people badness stops depending on numbers, or starts depending only really really really weakly on numbers, so weakly that you need a lot more arrows to make a number large enough to compensate—precisely on the grounds that when the exact same life is duplicated many times its (dis)value might be a slowly growing function of the number of duplicates. (2) The question falls far outside the range of questions on which my moral intuitions are (so to speak) trained. I’ve never seriously encountered any case like it (with the outlandishly large numbers that are required to make it work), nor have any of my ancestors whose reproductive success indirectly shaped my brain. And, while indeed it would be nice to have a consistent and complete system of ethics that gives a definite answer in every case and never contradicts itself, in practice I bet I don’t. And in cases like this I think it’s reasonable to mistrust both whatever answers emerge directly from my intuitions (SPECKS is better!) and the answers I get from far-out-of-context extrapolations of other intuitions (TORTURE is better!).
[EDITED immediately after posting, to fix a formatting screwup.]
(Small nitpick: the pain from “a multiply-fractured leg” may bother you for longer than “an hour of expertly applied torture” would, but the general idea behind the scale is clear.)
If I have to choose between a million people getting 13 months’ torture and a million million million people getting 12 months’ torture, I pick the former.
In this case I’d choose as you do, just as in Jiro’s example:
3^^^3 people with a certain pain [versus] 1 person with a very slightly bigger pain.
The problem with these scenarios, however, is that they introduce a new factor: they’re comparing magnitudes of pain that are too close to each other. This applies not only to the amount of pain but also to the number of people:
10^12 stubbed toes aren’t as much worse than 10^6 stubbed toes as 10^6 stubbed toes are than 1.
I’d rather be tortured for 12 months than for 13 if those were my only options, but after having had both experiences I would barely be able to tell the difference. If you want to pose this problem to someone with enough presence of mind to tell the difference, you’re no longer torturing humans.
(If psychological damage is cumulative, one month may or may not make the difference between PTSD and total lunacy. Of course, if at the end of the 12 months I’m informed that I still have one more month to go, then I will definitely care about the difference. But let’s assume a normal, continuous torture scenario, where I wouldn’t be able to keep track of time.)
This is why,
1 person getting 50 years’ torture is less bad than 10^6 people getting 49 years, which is less bad than 10^18 people getting 48 years, which is less bad than [… a million steps here …] which is less bad than [some gigantic number] getting stubbed toes.
runs into a Sorites problem that is more complex than EY’s blunt solution of nipping it in the bud.
In another thread (can’t locate it now), someone argued that moral considerations about the use of handguns were transparently applicable to the moral debate on nuclear weapons, and I didn’t know how to present the (to me) super-obvious case that nuclear weapons are on another moral plane entirely.
You could say my objection to your 50 Shades of Pain has to do with continuity and with the meaningfulness of a scale over very large numbers. Such a quantitative scale would necessarily include several qualitative transitions, and the absurd results of ignoring them are what you get when you try to translate a subjective, essentially incommunicable experience into a neat progression of numbers.
(You could remove that obstacle by asking self-aware robots to solve this thought experiment, and they would be able to give you a precise answer about which pain is numerically worse, but in that case the debate wouldn’t be relevant to us anymore.)
while indeed it would be nice to have a consistent and complete system of ethics that gives a definite answer in every case and never contradicts itself, in practice I bet I don’t.
The underlying assumptions behind this entire thought experiment are a moral theory that leads to not being able to choose between 2 persons being tortured for 25 years and 1 person being tortured for 50 years, which is regrettable, and a decision theory that leads to scenarios where small questions can quickly escalate to blackmailing and torture, which is appalling.
3^^^3 people with a certain pain [versus] 1 person with a very slightly bigger pain.
The problem with these scenarios, however, is that they introduce a new factor: they’re comparing magnitudes of pain that are too close to each other.
That was in response to your idea that small amounts of pain cannot be added up, but large amounts can.
If this is true, then there is a transition point where you go from “cannot be added up” to “can be added up”. Around that transition point, there are two pains that are close to each other yet differ in that only one of them can be added up. This leads to the absurd conclusion that you prefer lots of people with one pain to 1 person with the other, even though they are close to each other.
Saying “the trouble with this is that it compares magnitudes that are too close to each other” doesn’t resolve this problem, it helps create this problem. The problem depends on the fact that the two pains don’t differ in magnitude very much. Saying that these should be treated as not differing at all just accentuates that part, it doesn’t prevent there from being a problem.
I’m thinking of the type of scale where any two adjacent points are barely distinguishable but you see qualitative changes along the way; something like this.
That doesn’t solve the problem. The transition from “cannot be added up” to “can be added up” happens at two adjacent points.
As I don’t think pain can be expressed in numbers, I don’t think it can be added up, no matter its magnitude.
In that case, you can’t even prefer one person with pain to 3^^^^3 people with the same pain.
(And if you say that you can’t add up sizes of pains, but you can add up “whether there is a pain”, the latter is all that is necessary for one of the problems to happen; exactly which problem happens depends on details such as whether you can do this for all sizes of pains or not.)
they’re comparing magnitudes of pain that are too close to each other.
Doesn’t that make the argument stronger? I mean, if you’re not even sure that 13 months of torture are much worse than 12 months of torture, then you should be pretty confident that 10^6 instances of 12 months’ torture are worse than 1 instance of 13 months’ torture, no?
Such a quantitative scale would necessarily include several qualitative transitions
So that was the option I described as “abandon continuity”. I was going to ask you to be more specific about where those qualitative transitions happen, but if I’m understanding you correctly I think your answer would be to say that the very question is misguided because there’s something ineffable about the experience of pain that makes it inappropriate to try to be quantitative about it, or something along those lines. So I’ll ask a different question: What do those qualitative transitions look like? What sort of difference is it that can occur between what look like two very, very closely spaced gradations of suffering, but that is so huge in its significance that it’s better for a billion people to suffer the less severe evil than for one person to suffer the more severe?
(You mention one possible example in passing: the transition from “PTSD” to “total lunacy”. But surely in practice this transition isn’t instantaneous. There are degrees of psychological screwed-up-ness in between “PTSD” and “total lunacy”, and there are degrees of probability of a given outcome, and what happens as you increase the amount of suffering is that the probabilities shift incrementally from each outcome to slightly worse ones; when the suffering is very slight and brief, the really bad outcomes are very unlikely; when it’s very severe and extended, the really bad outcomes are very likely. So is there, e.g., a quantitative leap in badness when the probability of being badly enough messed-up to commit suicide goes from 1% to 1.01%, or something?)
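(A toy sketch of what “probabilities shifting incrementally” could look like; the logistic curve and every number in it are invented purely for illustration, not a model of real psychological harm.)

```python
from math import exp

# Toy illustration of suffering nudging the *probability* of really bad outcomes.
def p_breakdown(severity):
    """Hypothetical probability of a really bad outcome, rising smoothly with severity (0..10 scale)."""
    return 1 / (1 + exp(-(severity - 8.0)))

def expected_badness(severity, badness_of_breakdown=1000.0):
    p = p_breakdown(severity)
    return p * badness_of_breakdown + (1 - p) * severity

for s in (3.0, 3.01, 6.0, 6.01, 9.0, 9.01):
    print(s, round(expected_badness(s), 2))

# Each 0.01 step in severity shifts the probability, and hence the expected badness,
# by only a small fraction of its value, everywhere on the scale. There is no pair
# of adjacent gradations where a jump of qualitative significance appears on its own.
```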
a moral theory that leads to not being able to choose between 2 persons being tortured for 25 years and 1 person being tortured for 50 years
If you mean that anyone here is assuming some kind of moral calculus where suffering is denominated in torture-years and is straightforwardly additive across people, I think that’s plainly wrong. On the other hand, if you mean that it should be absolutely obvious which of those two outcomes is worse … well, I’m not convinced, and I don’t think that’s because I have a perverted moral system, because it seems to me it’s not altogether obvious on any moral system and I don’t see why it should be.
a decision theory that leads to scenarios where small questions can quickly escalate to blackmailing and torture
I’m not sure what you mean. Could you elaborate?
Presumably, you still think that large amounts of pain can be added up.
In that case, that must have a threshold too; something that causes a certain amount of pain cannot be added up, while something that causes a very very slightly greater amount of pain can add up. That implies that you would prefer 3^^^3 people having pain at level 1 to one person having pain of level 1.00001, as long as 1 is not over the threshold for adding up but 1.00001 is. Are you willing to accept that conclusion?
(Incidentally, for a real world version, replace “torture” with “driving somewhere and accidentally running someone over with your car” and “specks” with “3^^^3 incidences of not being able to do something because you refuse to drive”. Do you still prefer specks to torture?)
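(A minimal toy model of the threshold argument above, using the pain levels 1 and 1.00001; the cut-off value and the aggregation rule are made up purely for illustration and are not anyone’s actual position.)

```python
# Suppose pains below some threshold "don't add up" across people, while pains
# at or above it do. The specific numbers here are invented for illustration only.

THRESHOLD = 1.00001  # hypothetical cut-off between "doesn't add up" and "adds up"

def total_badness(pain, people):
    """Sub-threshold pain ignores head-count; at or above the threshold it multiplies."""
    return pain if pain < THRESHOLD else pain * people

huge = 3 ** 27  # stand-in for 3^^^3, which won't fit in any computer

print(total_badness(1.0, huge))    # 1.0      -> a vast crowd in pain, rounded to one person's pain
print(total_badness(1.00001, 1))   # 1.00001  -> a single person in barely-worse pain

# By this rule, the single barely-worse pain counts as the worse outcome, i.e. the
# threshold view prefers 3^^^3 people suffering to one person suffering a hair more --
# the absurd conclusion the argument above points at.
```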
That implies that you would prefer 3^^^3 people having pain at level 1 to one person having pain of level 1.00001, as long as 1 is not over the threshold for adding up but 1.00001 is.
As I stated before, doctors can’t agree on how to quantify pain, and I’m not going to attempt it either. This does not prevent us from comparing lesser and bigger pains, but there are no discrete “pain units” any more than there are utilons.
(Incidentally, for a real world version, replace “torture” with “driving somewhere and accidentally running someone over with your car” and “specks” with “3^^^3 incidences of not being able to do something because you refuse to drive”. Do you still prefer specks to torture?)
I would choose the certain risk of one traffic victim over 3^^^3 people unable to commute. But this example has a lot more ramifications than 3^^^3 specks. The lack of further consequences (and of aggregation capability) is what makes the specks preferable despite their magnitude. A more accurate comparison would be choosing between one traffic victim and 3^^^3 drivers annoyed by a paint scratch.
As I stated before, doctors can’t agree on how to quantify pain, and I’m not going to attempt it either. This does not prevent us from comparing lesser and bigger pains, but there are no discrete “pain units” any more than there are utilons.
If you can compare bigger and smaller pains, and if bigger pains can add and smaller pains cannot, you run into this problem. Whether you call one pain 1 and another 1.00001 or whether you just say “pain” and “very slightly bigger pain” is irrelevant—the question only depends on being able to compare them, which you already said you can do. What you say implies that you would prefer 3^^^3 people with a certain pain to 1 person with a very slightly bigger pain. Is this really what you want?
Please see my recent reply here.