(A small nitpick: the pain from “a multiply-fractured leg” may well bother you for longer than “an hour of expertly applied torture”, but the general idea behind the scale is clear.)
If I have to choose between a million people getting 13 months’ torture and a million million million people getting 12 months’ torture, I pick the former.
In this case I’d choose as you do, just as in Jiro’s example:
3^^^3 people with a certain pain [versus] 1 person with a very slightly bigger pain.
The problem with these scenarios, however, is that they introduce a new factor: they’re comparing magnitudes of pain that are too close to each other. This applies not only to the amount of pain but also to the number of people:
10^12 stubbed toes aren’t as much worse than 10^6 stubbed toes as 10^6 stubbed toes are worse than one.
I’d rather be tortured for 12 than 13 months if those were my only options, but after having had both experiences I would barely be able to tell the difference. If you want to pose this problem to someone with enough presence of mind to tell the difference, you’re no longer torturing humans.
(If psychological damage is cumulative, one month may or may not make the difference between PTSD and total lunacy. Of course, if at the end of the 12 months I’m informed that I still have one more month to go, then I will definitely care about the difference. But let’s assume a normal, continuous torture scenario, where I wouldn’t be able to keep track of time.)
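A toy way to put numbers on this indistinguishability point, and on the stubbed-toes comparison above. This is a minimal sketch; the logarithmic (Weber–Fechner-style) perception model is purely my illustrative assumption, not a claim anyone in the thread is making:

```python
import math

# Illustrative assumption only: perceived severity grows with the
# logarithm of the quantity, so equal *ratios* feel like equal steps.
def perceived(x):
    return math.log(x)

# 12 vs. 13 months of torture is a barely perceptible step...
print(perceived(13) - perceived(12))         # ~0.08
# ...and 10^6 vs. 10^12 stubbed toes is exactly the same size of
# perceptual step as 1 vs. 10^6 toes, matching the intuition above.
print(perceived(10**12) - perceived(10**6))  # ~13.82
print(perceived(10**6) - perceived(1))       # ~13.82
```

Under any such diminishing-sensitivity curve, the adjacent rungs of the scale compress together while the endpoints stay far apart, which is exactly where the trouble starts.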
This is why,
1 person getting 50 years’ torture is less bad than 10^6 people getting 49 years, which is less bad than 10^18 people getting 48 years, which is less bad than [… a million steps here …] which is less bad than [some gigantic number] getting stubbed toes.
runs into a Sorites problem that is more complex than EY’s blunt solution of nipping it in the bud.
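For what it’s worth, the quoted chain can be made mechanical under the naive additive reading that the Sorites worry targets. A sketch under toy assumptions, both mine: badness is linear in torture-years, and each step trades one year of duration for a 10^6-fold larger population:

```python
# Toy assumptions, mine alone: total badness = people x years, summed
# linearly across people, with a flat 10^6 population multiplier per
# step of the quoted chain.
people, years = 1, 50
total = people * years

for _ in range(10):            # stand-in for the "million steps here"
    people *= 10**6
    years -= 1
    new_total = people * years
    assert new_total > total   # each step strictly increases total
    total = new_total          # badness, so transitivity drags the
                               # comparison all the way down the chain

print(people, years)           # 10^60 people, 40 years each
```

Under these assumptions there is no principled rung on which to stop, which is the Sorites point: any objection has to attack either the linearity or the continuity, not the arithmetic.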
In another thread (can’t locate it now), someone argued that moral considerations about the use of handguns were transparently applicable to the moral debate on nuclear weapons, and I didn’t know how to present the (to me) super-obvious case that nuclear weapons are on another moral plane entirely.
You could say my objection to your 50 Shades of Pain has to do with continuity and with the meaningfulness of a scale over very large numbers. Such a quantitative scale would necessarily include several qualitative transitions, and the absurd results of ignoring them are what happens when you try to translate a subjective, essentially incommunicable experience into a neat progression of numbers.
(You could remove that obstacle by asking self-aware robots to solve this thought experiment, and they would be able to give you a precise answer about which pain is numerically worse, but in that case the debate wouldn’t be relevant to us anymore.)
While indeed it would be nice to have a consistent and complete system of ethics that gives a definite answer in every case and never contradicts itself, in practice I bet I don’t have one.
The underlying assumptions behind this entire thought experiment are a moral theory that leads to not being able to choose between 2 persons being tortured for 25 years and 1 person being tortured for 50 years, which is regrettable, and a decision theory that leads to scenarios where small questions can quickly escalate to blackmailing and torture, which is appalling.
3^^^3 people with a certain pain [versus] 1 person with a very slightly bigger pain.
The problem with these scenarios, however, is that they introduce a new factor: they’re comparing magnitudes of pain that are too close to each other.
That was in response to your idea that small amounts of pain cannot be added up, but large amounts can.
If this is true, then there is a transition point where you go from “cannot be added up” to “can be added up”. Around that transition point, there are two pains that are close to each other yet differ in that only one of them can be added up. This leads to the absurd conclusion that you prefer lots of people with one pain to 1 person with the other, even though they are close to each other.
Saying “the trouble with this is that it compares magnitudes that are too close to each other” doesn’t resolve this problem, it helps create this problem. The problem depends on the fact that the two pains don’t differ in magnitude very much. Saying that these should be treated as not differing at all just accentuates that part, it doesn’t prevent there from being a problem.
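Jiro’s threshold argument can be made concrete with a toy model; the threshold value, the pain numbers, and the aggregation rule below are all my invented placeholders:

```python
T = 5.0  # hypothetical threshold between "cannot" and "can be added up"

def total_badness(pain, people):
    if pain < T:
        return pain          # sub-threshold: many instances no worse than one
    return pain * people     # supra-threshold: sums linearly across people

# Two pains straddling the threshold, as close together as you like:
crowd = total_badness(4.999, people=10**9)  # a billion people, smaller pain
one = total_badness(5.001, people=1)        # one person, slightly bigger pain

# The absurd verdict Jiro describes: the billion-person outcome counts
# as less bad than the single sufferer, though the pains barely differ.
assert crowd < one
```

The reversal survives no matter how finely you space the two pains around the cutoff, which is why closeness of magnitudes creates rather than dissolves the problem.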
they’re comparing magnitudes of pain that are too close to each other.
Doesn’t that make the argument stronger? I mean, if you’re not even sure that 13 months of torture are much worse than 12 months of torture, then you should be pretty confident that 10^6 instances of 12 months’ torture are worse than 1 instance of 13 months’ torture, no?
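To make that concrete with placeholder numbers (the severity curve and the deliberately sublinear aggregation are both arbitrary choices of mine): even if extra sufferers count far less than linearly, a million of them swamps the tiny 12-to-13-month margin.

```python
import math

def badness(months):
    # Placeholder severity curve; all that matters is that 12 and 13
    # months come out nearly equal, per the "too close" premise.
    return math.log(months)

one_person_13 = badness(13)
# A million people at 12 months each, aggregated sublinearly (square
# root of the head count, an arbitrary choice):
million_people_12 = badness(12) * math.sqrt(10**6)

assert million_people_12 > one_person_13
```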
Such a quantitative scale would necessarily include several qualitative transitions
So that was the option I described as “abandon continuity”. I was going to ask you to be more specific about where those qualitative transitions happen, but if I’m understanding you correctly I think your answer would be to say that the very question is misguided because there’s something ineffable about the experience of pain that makes it inappropriate to try to be quantitative about it, or something along those lines. So I’ll ask a different question: What do those qualitative transitions look like? What sort of difference is it that can occur between what look like two very, very closely spaced gradations of suffering, but that is so huge in its significance that it’s better for a billion people to suffer the less severe evil than for one person to suffer the more severe?
(You mention one possible example in passing: the transition from “PTSD” to “total lunacy”. But surely in practice this transition isn’t instantaneous. There are degrees of psychological screwed-up-ness in between “PTSD” and “total lunacy”, and there are degrees of probability of a given outcome, and what happens as you increase the amount of suffering is that the probabilities shift incrementally from each outcome to slightly worse ones; when the suffering is very slight and brief, the really bad outcomes are very unlikely; when it’s very severe and extended, the really bad outcomes are very likely. So is there, e.g., a quantitative leap in badness when the probability of being badly enough messed-up to commit suicide goes from 1% to 1.01%, or something?)
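The incremental picture in that parenthesis can be sketched with any smooth curve; the logistic shape and its parameters below are arbitrary choices of mine, just to show that nearby severities yield nearby probabilities with no sudden leap:

```python
import math

def p_bad_outcome(suffering, midpoint=50.0, steepness=0.2):
    # Probability of a given bad outcome rises smoothly with suffering.
    return 1.0 / (1.0 + math.exp(-steepness * (suffering - midpoint)))

# Between any two closely spaced severities the probability shifts only
# slightly; there is no adjacent pair where badness jumps.
deltas = [p_bad_outcome(s + 0.01) - p_bad_outcome(s) for s in range(100)]
assert all(0 < d < 0.001 for d in deltas)
```

A defender of qualitative transitions would have to locate the leap somewhere on a curve like this, which is exactly the question being pressed.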
a moral theory that leads to not being able to choose between 2 persons being tortured for 25 years and 1 person being tortured for 50 years
If you mean that anyone here is assuming some kind of moral calculus where suffering is denominated in torture-years and is straightforwardly additive across people, I think that’s plainly wrong. On the other hand, if you mean that it should be absolutely obvious which of those two outcomes is worse … well, I’m not convinced, and I don’t think that’s because I have a perverted moral system, because it seems to me it’s not altogether obvious on any moral system and I don’t see why it should be.
a decision theory that leads to scenarios where small questions can quickly escalate to blackmailing and torture
I’m thinking of the type of scale where any two adjacent points are barely distinguishable but you see qualitative changes along the way; something like this.
That doesn’t solve the problem. The transition from “cannot be added up” to “can be added up” happens at two adjacent points.
As I don’t think pain can be expressed in numbers, I don’t think it can be added up, no matter its magnitude.
In that case, you can’t even prefer one person with pain to 3^^^^3 people with the same pain.
(And if you say that you can’t add up sizes of pains, but you can add up “whether there is a pain”, the latter is all that is necessary for one of the problems to happen; exactly which problem happens depends on details such as whether you can do this for all sizes of pains or not.)
I’m not sure what you mean. Could you elaborate?