Here’s a good way of looking at the problem.
Presumably, there’s going to be some variation in how the people are feeling. Given 3^^^3 people, this means I can find someone at pretty much any given level of pleasure or pain.
Suppose I find someone, Bob, with the same baseline happiness as the girl we’re suggesting torturing, Alice. I put a speck of dust in his eye. I then find someone with a nigh-infinitesimally worse baseline, Charlie, and do it again. I keep this up until I get to a guy, Zack, who, after the dust speck goes in his eye, is at the same happiness that Alice would be at if she were tortured.
To put numbers on this:
Alice and Bob have a base pain of 0, Charlie has 1, Dianne has 2, … Zack has 999,999,999,999. I then add one unit of pain to each person other than Alice. Now Alice has 0, Bob has 1, Charlie has 2, … Yaana has 999,999,999,999, Zack has 1,000,000,000,000. I could instead torture Alice. Then Alice has 1,000,000,000,000, Bob has 0, Charlie has 1, … Zack has 999,999,999,999. In other words: Bob has 0, Charlie has 1, Dianne has 2, … Zack has 999,999,999,999, Alice has 1,000,000,000,000.
It’s the same numbers both ways—just different people. The only way you could decide which is better is if you care more or less than average about Alice.
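To make the bookkeeping explicit, here is a minimal sketch of the toy model above, scaled down to a million levels so that it actually runs; the scaling is the only liberty taken, and the argument is unchanged at the full trillion.

```python
from collections import Counter

# Toy model from the comment above, scaled down so it runs: Bob..Zack have
# baseline pain 0, 1, ..., N-1 and Alice also starts at 0. N stands in for
# 1,000,000,000,000; the argument is identical at the full trillion.
N = 1_000_000
alice = 0
others = list(range(N))  # Bob has 0, Charlie has 1, ..., Zack has N-1

# Option 1: a dust speck (+1 pain) for each of Bob..Zack, Alice untouched.
specks = [alice] + [p + 1 for p in others]

# Option 2: torture Alice (0 -> N), leave Bob..Zack untouched.
torture = [alice + N] + others

# Both options yield exactly the same multiset of pain levels {0, 1, ..., N};
# only the names attached to each level differ.
assert Counter(specks) == Counter(torture)
print("same pain levels either way:", Counter(specks) == Counter(torture))
```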
Of course, this is just using 1,000,000,000,000 of 3^^^3 people. Add in another trillion, and now it’s like torturing two people. Add in another trillion, and it’s worse still. You get the idea.
Presumably, there’s going to be some variation in how the people are feeling. Given 3^^^3 people, this means I can find someone at pretty much any given level of pleasure or pain.
...
It’s the same numbers both ways—just different people. The only way you could decide which is better is if you care more or less than average about Alice.
If Yudkowsky had set up his thought experiment in this way, I would agree with him. But I don’t believe there’s any reason to expect there to be a distribution of pain in the way that you describe—or in any case it seems like Yudkowsky’s point should generalise, and I’m not sure that it does.
If all 3^^^3 + 1 people are at pain level 0, and I then have the choice of bringing them all up to pain level 1 or leaving 3^^^3 of them at pain level 0 and bringing one of them up to pain level 1,000,000,000,000, I would choose the former.
I may have increased the number of pain units in existence, but my value computation doesn’t work by adding up “pain units”. I’m almost entirely unconcerned about 3^^^3 people experiencing pain level 1; they haven’t reached my threshold for caring about the pain they are experiencing. On the other hand, the individual being tortured is way above this threshold and so I do care about him.
I don’t know where the threshold(s) are, but I’m sure that if my brain was examined closely there would be some arbitrary points at which it decides that someone else’s pain level has become intolerable. Since these jumps are arbitrary, this would seem to break the idea that “pain units” are additive.
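To make the two value computations being contrasted here concrete, here is a minimal sketch with a made-up threshold and a small population standing in for 3^^^3; none of these numbers come from the thread.

```python
THRESHOLD = 500  # hypothetical "point at which the pain starts to matter"

def additive_badness(pain_levels):
    # Every pain unit counts and they all add up.
    return sum(pain_levels)

def threshold_badness(pain_levels):
    # Pain below the threshold is ignored entirely.
    return sum(p for p in pain_levels if p >= THRESHOLD)

n_people = 10**6                                # stand-in for 3^^^3
everyone_specked = [1] * n_people               # everyone at pain level 1
one_tortured = [0] * (n_people - 1) + [10**12]  # one person at torture level

print(additive_badness(everyone_specked), additive_badness(one_tortured))
# 1000000 vs 1000000000000 -- and with 3^^^3 people the specks side wins.
print(threshold_badness(everyone_specked), threshold_badness(one_tortured))
# 0 vs 1000000000000 -- the specks never register, no matter how many people.
```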
Is the distribution necessary (other than as a thought experiment)?
Simplifying to a 0->3 case: if changing (in the entire universe, say) all 0->1, all 1->2, and all 2->3 is judged as worse than changing one person’s 0->3 (for the reason that, with an even distribution, the 1s and 2s would stay the same in number while the 3s would increase and the 0s would decrease), then for what hypothetical distribution would it be even worse, and for what hypothetical distribution would it be less bad? Is it worse if there are only 0s who all become 1s, or is it worse if there are only 2s who all become 3s? Is a dust speck classed as worse if you do it to someone being tortured than to someone leading a normal life, or vice versa? Or is it just as bad no matter what the distribution is, in which case the distribution is unimportant?
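As a quick check of the bookkeeping in the even-distribution case (the population per level is arbitrary, and the sketch assumes the existing 3s simply stay at 3):

```python
from collections import Counter

# Even spread over levels 0-3; then change all 0s to 1s, 1s to 2s, 2s to 3s.
N = 1_000
before = [0] * N + [1] * N + [2] * N + [3] * N
after = [level + 1 if level < 3 else level for level in before]

print(dict(sorted(Counter(before).items())))  # {0: 1000, 1: 1000, 2: 1000, 3: 1000}
print(dict(sorted(Counter(after).items())))   # {1: 1000, 2: 1000, 3: 2000}
# The counts of 1s and 2s are unchanged; the 0s vanish and the 3s double.
```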
...then again, if one weighs matters solely on the magnitude of individual change, then that greater difference can appear and disappear like a mirage as one shifts back and forth between considering those involved collectively or reductionistically… hrm.
Intuitively speaking, it seems inconsistent to state that 4A, 4B and 4C are acceptable, but A+B+C is not acceptable (where A is N people 0->1, B is N 1->2, C is N 2->3).
...the aim of the even-distribution example is perhaps to show that, by the magnitude-difference measurement, the outcome can be worse; then to break it down and show that for uneven cases too the suffering inflicted is equivalent, and so for consistency one must continue to view it as worse...
(Again, this time shifting it to a 0-1-2 case: why would it be {unacceptable for N people to be 1->2 if and only if N people were also 0->1, but not unacceptable for N people to be 1->2 if 2N more people were 1->2} /and also/ {unacceptable for N people to be 0->1 if and only if N people were also 1->2, but not unacceptable for N people to be 0->1 if 2N more people were 0->1}?)
The arbitrary-points concept, rather than a smooth gradient, is also a reasonable point to consider. With a smooth gradient, the more pain another person is going through, the more objectionable it is. With an arbitrary threshold, one could find someone suffering greatly not to be an objectionable thing, yet find someone else suffering by a negligible amount more to be a significantly objectionable thing. Officially adopting such a cut-off point for sympathy—particularly one based on an arbitrarily-arrived-at brain structure rather than well-founded ethical/moral reasoning—would seem to be incompatible with true benevolence and desire for others’ well-being, suggesting that even if such arbitrary thresholds exist we should aim to act as though they did not.
(In other words, if we know that we are liable not to scale our contribution to the scale of (the results of) what we’re contributing towards, we should aim to take that into account and deliberately, manually, impose the scaling that would otherwise have been left out of our considerations. In this situation, if as a rule of thumb we tend to ignore low suffering and pay attention to high suffering, we should take care to acknowledge the unpleasantness of all suffering and act appropriately when considering decisions that could control such suffering.)
(Preferable to not look back in the future and realise that, because of overreliance on hardwired rules of thumb, one had taken actions which betrayed one’s true system of values. If deliberately rewiring one’s brain to eliminate the cut-off crutches, say, one would hopefully prefer to at that time not be horrified by one’s previous actions, but rather be pleased at how much easier taking the same actions has become. Undesirable to resign oneself to being a slave of one’s default behaviour.)
Why would they all be at pain number zero? I’d expect them to be randomly distributed in all their traits unless specified otherwise. If I give them a mean pain of zero and a standard deviation of 1, there’d be no shortage of people with a pain level of 1,000,000,000,000. The same goes for any reasonable distribution.
If you play around with my paradox a bit more, you can work out that if you have 1,000,000,000,000 people at pain level n, and one person at pain level zero, there must be some n between 0 and 999,999,999,999 such that it’s at least as bad to torture the one person as to give the rest dust specks.
Where is the marginal disutility like that? If you have 1,000,000,000,000 people at pain 999,999,999,999, and one at pain 0, would you rather torture the one, or give the 1,000,000,000,000 people dust specks?
they haven’t reached my threshold for caring about the pain they are experiencing
So, are you saying that there’s a threshold x, such that any amount of pain less than x doesn’t matter? This would mean that increasing it from x-1 to x for 3^^^3 people would do nothing, but increasing it from x to x+1 would be horrible? Put another way, you have 3^^^3 people at a pain level of x-1, and you give them all one dust speck. This doesn’t matter. If you give them a second dust speck, now it’s an unimaginable atrocity.
I would expect a cutoff like this to be an approximation. You’d actually think that the marginal disutility of pain starts out at zero, and steadily increases until it approaches one. If this were true, one dust speck would bring the pain to 1, which would make the marginal disutility slightly above zero, so it would have some tiny amount of badness. If you multiply that by 3^^^3, it becomes unimaginable.
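Here is a sketch of that kind of curve, using an invented logistic shape with invented parameters; nothing below comes from the original discussion.

```python
import math

def marginal_disutility(pain_level):
    # Starts near zero for low pain and rises toward one for severe pain.
    midpoint, width = 500_000_000_000, 10_000_000_000  # hypothetical numbers
    return 1.0 / (1.0 + math.exp(-(pain_level - midpoint) / width))

first_speck = marginal_disutility(1)  # tiny, but strictly greater than zero
print(f"badness of the first dust speck: {first_speck:.3e}")

# Any fixed positive badness, multiplied by 3^^^3, swamps a single torture.
# 3^^^3 itself is far beyond computation, but even the much smaller 3^^4
# (= 3 raised to the power 3^^3 = 3^7,625,597,484,987) is already absurd:
exponent = 3 ** (3 ** 3)  # 3^^3 = 3^27 = 7,625,597,484,987
digits = math.floor(exponent * math.log10(3)) + 1
print(f"3^^4 has roughly {digits:,} decimal digits")
```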
Why would they all be at pain number zero? I’d expect them to be randomly distributed in all their traits unless specified otherwise. If I give them a mean pain of zero and a standard deviation of 1, there’d be no shortage of people with a pain level of 1,000,000,000,000. The same goes for any reasonable distribution.
It’s a thought experiment. The whole scenario is utterly far-fetched, so there’s no use in arguing that this or that detail of the thought experiment is what we should “expect” to find.
As such, I choose the version of the thought experiment that best teases out the dilemma that Yudkowsky is trying to explore, which concerns the question of whether we should consider pain to be denominated all in the same units—i.e. 3^^^3 x minuscule pain > 1 x torture—in our moral calculations.
EDIT: in response to the rest of your comment, see my reply to “Unnamed”.
To get Eliezer’s point, make the world more inconvenient: 3^^^3 people, all with pain tolerances equivalent to yours, getting dust specks in their eyes, or one person tortured for 50 years.
I believe the problem with this is that you have given actual values (pain units), and equated the two levels of “torture” outlined in the original thought experiment. Specifically, equating one trillion humans each with a dust speck in the eye and Alice being tortured.
So, what’s the problem? Is a dust speck incomparable to torture? A dust speck is comparable to something slightly worse than a dust speck, which is comparable to something slightly worse than that, etc. At some point, you’ll compare dust specks to torture. You may not live long enough to follow that out explicitly, just like you could never start with one grain of sand and keep adding them one at a time to get a beach, but the comparison still exists.
No comparison exists if, as I mentioned in my other post, the fleeting discomfort is lost in the noise of other minor nuisances and has no lasting effect. One blink, and the whole thing is forgotten forever, quickly replaced by an itch in your bum, flickering fluorescent light overhead, your roommate coughing loudly, or an annoying comment on LW.
the fleeting discomfort is lost in the noise of other minor nuisances and has no lasting effect.
One speck of sand will be lost in a beach, but adding a speck of sand will still make it a bigger beach, and adding 3^^^3 specks of sand will make it a black hole.
has no lasting effect.
You notice it while it’s happening. You forget about it eventually, but even if you were tortured for 3^^^3 years before finally dying, you’d forget it all the moment you die.
One speck of sand will be lost in a beach, but adding a speck of sand will still make it a bigger beach, and adding 3^^^3 specks of sand will make it a black hole.
I consider it a faulty analogy. Here is one I like better: if the said speck of dust disintegrates into nothing after an instant, there is no bigger beach and no black hole.
If you consider the disutility of the dust speck zero, because the brief annoyance will be forgotten, then can the disutility of the torture also be made into zero, if we merely add the stipulation that the tortured person will then have the memory of this torture completely erased and the state of their mind reverted to what it had been before the torture?
This is an interesting question, but it seems to be in a different realm. For example, it could be reformulated as follows: is this 50-year torture option that bad if it is parceled into 1-second chunks, any memory of each one is erased immediately, and it has no lasting side effects?
For the purpose of this discussion, I assume that it is 50 dismal years with all the associated memories accumulated all the way through and thereafter. In that sense it is qualitatively in a different category than a dust speck. This might not be your (or EY’s) interpretation.
One speck of sand will be lost in a beach, but adding a speck of sand will still make it a bigger beach, and adding 3^^^3 specks of sand will make it a black hole.
6 × 10^30 kilograms of sand on one beach on one inhabited planet will collapse it into a black hole, which is a far, far smaller amount of mass than 3^^^3 molecules of silicon dioxide. But adding one molecule of silicon dioxide to each of 3^^^3 beaches on inhabited planets throughout as many universes as necessary seems to cause far less disutility than adding 6 × 10^30 kilograms of sand to one beach on one inhabited planet.
Is the problem that we’re unable to do math? You can’t possibly say one molecule of silicon dioxide is incomparable to 6 × 10^30 kilograms of sand, can you? They’re indisputably the same substance, after all; 6 × 10^55 molecules of SiO2 is 6 × 10^30 kilograms of sand. Even if you make the disutility nonlinear, you have to do something really, really extreme to overcome 3^^^3 . . . and if you do that, why, let’s substitute in 3^^^^3 or 3^^^^^3 instead.
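For what it’s worth, the conversion does check out, using the standard molar mass of SiO2 and Avogadro’s number (neither figure appears in the thread):

```python
AVOGADRO = 6.022e23        # molecules per mole
MOLAR_MASS_SIO2 = 60.08    # grams per mole

molecules = 6e55
mass_kg = molecules / AVOGADRO * MOLAR_MASS_SIO2 / 1000
print(f"{molecules:.0e} molecules of SiO2 is about {mass_kg:.1e} kg")
# ~6.0e30 kg -- roughly three solar masses' worth of sand.
```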
Is the problem that we are failing to evaluate what happens if everybody else makes the same decision? If 6 × 10^55 people were given the decision and they all chose the molecule, 3^^^3 inhabited planets would be converted into black holes, while if they all made the other choice, only 6 × 10^55 planets would be. So when faced with an option that seems to cause no disutility, must we instead annihilate seven billion people, because if enough other people made our decision it would be far worse than if we and all of them made the other?
My point wasn’t so much that it will cause a black hole as that a tiny amount of disutility times 3^^^3 is going to be unimaginably horrible, regardless of how small that disutility is.
Is the problem that we are failing to evaluate what happens if everybody else makes the same decision?
That’s not the problem at all. Thinking about that is a good sanity check. If it’s good to make that decision once, it’s better to make it 10^30 times. However, it’s only a sanity check. Everybody isn’t going to make the same decision as you, so there’s no reason to assume they will.
6 × 10^30 kilograms of sand on one beach on one inhabited planet will collapse it into a black hole, which is a far, far smaller amount of mass than 3^^^3 molecules of silicon dioxide. But adding one molecule of silicon dioxide to each of 3^^^3 beaches on inhabited planets throughout as many universes as necessary seems to cause far less disutility than adding 6 × 10^30 kilograms of sand to one beach on one inhabited planet.
The analogy does not fit. Dust specks have an approximately known small negative utility. The benefit or detriment of adding sand to the beaches is not specified one way or the other. If it were specified, then I’d be able to tell you whether it sounds better or worse than destroying a planet.
The original thought experiment is used to provide a pure example of quantifying and comparing arbitrary levels of suffering as a test to see whether we support such a type of utilitarian consequentialism.
By comparing torture to torture, you are changing the scenario to test a slightly weaker version of the original type of utilitarian consequentialism: one where you do quantify arbitrary changes to absolute levels of suffering and compare them with arbitrary absolute levels of suffering, but where you do not necessarily allow the two absolute levels of suffering to be arbitrary with respect to each other.
If anyone could rewrite this comment to be comprehensible I would appreciate it.