Presumably, there’s going to be some variation in how the people are feeling. Given 3^^^3 people, this means I can pretty much find someone experiencing any given amount of pleasure/pain.
...
It’s the same numbers both ways—just different people. The only way you could decide which is better is if you care more or less than average about Alice.
If Yudkowsky had set up his thought experiment in this way, I would agree with him. But I don’t believe there’s any reason to expect there to be a distribution of pain in the way that you describe—or in any case it seems like Yudkowsky’s point should generalise, and I’m not sure that it does.
If all 3^^^3 + 1 people are at pain level 0, and I have the choice of bringing them all up to pain level 1 or leaving 3^^^3 of them at pain level 0 and bringing one of them up to pain level 1,000,000,000,000, I would choose the former.
I may have increased the number of pain units in existence, but my value computation doesn’t work by adding up “pain units”. I’m almost entirely unconcerned about 3^^^3 people experiencing pain level 1; they haven’t reached my threshold for caring about the pain they are experiencing. On the other hand, the individual being tortured is way above this threshold and so I do care about him.
I don’t know where the threshold(s) are, but I’m sure that if my brain was examined closely there would be some arbitrary points at which it decides that someone else’s pain level has become intolerable. Since these jumps are arbitrary, this would seem to break the idea that “pain units” are additive.
Is the distribution necessary (other than as a thought experiment)?
Simplifying to a 0->3 case: if changing (in the entire universe, say) all 0->1, all 1->2, and all 2->3 is judged worse than changing one person’s 0->3 (on the grounds that, for an even distribution, the number of 1s and 2s stays the same while the 3s increase and the 0s decrease), then for what hypothetical distribution would it be even worse, and for what hypothetical distribution would it be less bad? Is it worse if there are only 0s who all become 1s, or if there are only 2s who all become 3s? Is a dust speck classed as worse if inflicted on someone being tortured rather than on someone in a normal life, or vice versa? Or is it just as bad no matter what the distribution, in which case the distribution is unimportant?
...then again, if one weighs matters solely on the magnitude of individual change, then that greater difference can appear and disappear like a mirage as one shifts back and forth between considering those involved collectively and reductionistically… hrm.
Intuitively speaking, it seems inconsistent to state that 4A, 4B and 4C are acceptable, but A+B+C is not acceptable (where A is N people 0->1, B is N 1->2, C is N 2->3).
...the aim of the even distribution example is perhaps to show that by the magnitude-difference measurement the outcome can be worse, then break it down to show that for uneven cases too the suffering inflicted is equivalent and so for consistency one must continue to view it as worse...
(Again, this time shifting to a 0-1-2 scale: why would it be {unacceptable for N people to go 1->2 if and only if N people also went 0->1, but not unacceptable for N people to go 1->2 if 2N more people also went 1->2} /and also/ {unacceptable for N people to go 0->1 if and only if N people also went 1->2, but not unacceptable for N people to go 0->1 if 2N more people also went 0->1}?)
The arbitrary-points concept, rather than a smooth gradient, is also a reasonable point to consider. With a smooth gradient, the more pain another person is going through, the more objectionable it is. With an arbitrary threshold, one could find someone suffering greatly not to be objectionable, yet find someone else suffering by a negligible amount more to be significantly objectionable. Officially adopting such a cut-off point for sympathy, particularly one based on an arbitrarily-arrived-at brain structure rather than well-founded ethical/moral reasoning, would seem incompatible with true benevolence and desire for others’ well-being, suggesting that even if such arbitrary thresholds exist we should aim to act as though they did not.
(In other words, if we know that we are liable not to scale our concern with the scale of (the results of) what we’re contributing towards, we should take that into account and deliberately, manually, impose the scaling that would otherwise be left out of our considerations. In this situation, if as a rule of thumb we tend to ignore low suffering and pay attention only to high suffering, we should take care to acknowledge the unpleasantness of all suffering and act appropriately when making decisions that could control such suffering.)
(Preferable to not look back in the future and realise that, because of overreliance on hardwired rules of thumb, one had taken actions which betrayed one’s true system of values. If deliberately rewiring one’s brain to eliminate the cut-off crutches, say, one would hopefully prefer to at that time not be horrified by one’s previous actions, but rather be pleased at how much easier taking the same actions has become. Undesirable to resign oneself to being a slave of one’s default behaviour.)
Why would they all be at pain number zero? I’d expect them to be randomly distributed in all their traits unless specified otherwise. If I give them a mean pain of zero and a standard deviation of 1, there’d be no shortage of people with a pain level of 1,000,000,000,000. The same goes for any reasonable distribution.
If you play around with my paradox a bit more, you can work out that if you have 1,000,000,000,000 people at pain level n, and one person at pain level zero, there must be some n between 0 and 999,999,999,999 such that it’s at least as bad to torture the one person as to give the rest dust specks.
Where is the marginal disutility like that? If you have 1,000,000,000,000 people at pain 999,999,999,999, and one at pain 0, would you rather torture the one, or give the 1,000,000,000,000 people dust specks?
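The crossover claim above can be sketched numerically. Everything in this sketch is an illustrative invention, not anything from the thread: the disutility curve (whose marginal disutility p/(p+K) climbs from 0 toward 1), the constant K, and the coarse scan are all assumptions chosen only to make the existence of such an n concrete.

```python
import math

K = 1_000.0      # assumed "softness" of the disutility curve
N = 10**12       # people sitting at pain level n

def marginal(pain: float) -> float:
    """Extra badness of one more unit of pain: integral of p/(p+K) over [pain, pain+1]."""
    # Written with log1p to stay numerically stable at large pain levels.
    return 1.0 - K * math.log1p(1.0 / (K + pain))

def total_disutility(pain: float) -> float:
    """Integral of p/(p+K) from 0 to pain."""
    return pain - K * math.log1p(pain / K)

# One person taken from pain 0 all the way to 10**12.
torture_cost = total_disutility(10**12)

# Coarsely scan for the first n at which giving all N people one dust
# speck is at least as bad as the torture.
n = 0.0
while N * marginal(n) < torture_cost:
    n += 10**6
print(int(n))  # some n far below 999,999,999,999
```

The point of the sketch is only that, under any curve whose marginal disutility approaches a ceiling, such a crossover n must exist well short of the torture level.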
they haven’t reached my threshold for caring about the pain they are experiencing
So, are you saying that there’s a threshold x, such that any amount of pain less than x doesn’t matter? This would mean that increasing it from x-1 to x for 3^^^3 people would do nothing, but increasing it from x to x+1 would be horrible? Put another way, you have 3^^^3 people at a pain level of x-1, and you give them all one dust speck. This doesn’t matter. If you give them a second dust speck, now it’s an unimaginable atrocity.
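The reductio can be put in concrete form. The threshold X, the population size, and the all-or-nothing badness function below are purely illustrative assumptions, not anything the commenter specified:

```python
# Sketch of the hard-threshold view under discussion: pain at or below some
# cutoff X counts for nothing; pain above X counts in full.

X = 100             # hypothetical threshold
population = 10**9  # stand-in for 3^^^3

def badness(pain: int) -> int:
    """All-or-nothing: pain at or below the threshold is ignored entirely."""
    return pain if pain > X else 0

# Everyone starts at pain X - 1. One dust speck each (pain becomes X):
# total badness is still exactly zero.
print(population * badness(X - 1 + 1))   # 0

# A second dust speck each (pain becomes X + 1): badness explodes.
print(population * badness(X - 1 + 2))   # 101000000000
```

A single extra speck flips the total from nothing to astronomical, which is the discontinuity being objected to.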
I would expect a cutoff like this would be an approximation. You’d actually think that the marginal disutility of pain starts out at zero, and steadily increases until it approaches one. If this were true, one dust speck would bring the pain to 1, which would make the marginal disutility slightly above zero, so that would have some tiny amount of badness. If you multiply it by 3^^^3, now it’s unimaginable.
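The smoothed version described here can be sketched the same way. The particular curve m(p) = p/(p+K), and the constant K, are assumptions chosen only so that marginal disutility starts at zero and rises toward one:

```python
K = 1_000.0  # assumed scale; controls how slowly marginal disutility climbs

def marginal_disutility(pain: float) -> float:
    """Badness of one extra unit of pain at this level; rises from 0 toward 1."""
    return pain / (pain + K)

# One dust speck takes pain from 0 to 1: the marginal disutility is tiny,
# but strictly positive.
speck = marginal_disutility(1.0)
print(speck)

# Multiplied across an enormous population, it dominates one torture.
population = 10**30                      # any huge stand-in for 3^^^3
torture = marginal_disutility(10**12)    # close to 1
print(population * speck > torture)      # True
```

Because the speck’s badness is small but nonzero, a sufficiently large multiplier always overtakes the bounded badness of the single torture, which is the "unimaginable" conclusion the comment draws.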
Why would they all be at pain number zero? I’d expect them to be randomly distributed in all their traits unless specified otherwise. If I give them a mean pain of zero and a standard deviation of 1, there’d be no shortage of people with a pain level of 1,000,000,000,000. The same goes with any reasonable distribtion.
It’s a thought experiment. The whole scenario is utterly far-fetched, so there’s no use in arguing that this or that detail of the thought experiment is what we should “expect” to find.
As such, I choose the version of the thought experiment that best teases out the dilemma that Yudkowsky is trying to explore, which concerns the question of whether we should consider pain to be denominated all in the same units—i.e. 3^^^3 x minuscule pain > 1 x torture—in our moral calculations.
EDIT: in response to the rest of your comment, see my reply to “Unnamed”.