I think I have to go with the dust specks. Tomorrow, all 3^^^3 of those people will have forgotten entirely about the speck of dust. It is an event nearly indistinguishable from thermal noise. People, all of them everywhere, get dust specks in their eyes just going about their daily lives with no ill effect.
The torture actually hurts someone. And in a way that’s rather non-recoverable. Recoverability plays a large part in my moral calculations.
But there’s a limit to how many times I can make that trade. 3^^^3 people is a LOT of people, and it doesn’t take anywhere near a significant fraction of THAT before I have to stop saving torture victims, lest everyone’s life, everywhere, consist of nothing but a sandblaster to the face.
What you’re doing there is positing a “qualitative threshold” of sorts, below which the anti-hedons from the dust specks cause absolutely zero disutility whatsoever. That can be an acceptable real-world evaluation in a suitably loaded subjective context.
However, the problem states that the dust specks have non-zero disutility. This means they have some predicted net negative impact somewhere. If that impact is merely to slow the brain’s visual recognition of one word by even 0.03 seconds, in a manner directly caused by the speck and that would not have occurred without it, then over 3^^^3 people that is still more man-hours of work lost than the sum of all lifetimes of every human who has ever lived on Earth. If that is not a tragic loss far more dire than one person being tortured, I don’t see what could be. And I’m obviously being generous with that “0.03 seconds” estimate.
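To make that arithmetic concrete, here is a back-of-the-envelope sketch. The roughly 100 billion humans ever and 50-year average lifespan are my own assumed figures, not part of the original problem, and 3^^^3 itself is far too large to evaluate, so the comparison is done on a log scale against the vastly smaller lower bound 3^^4 = 3^(3^27):

```python
import math

# Assumed figures (illustrative only): ~100 billion humans have ever lived,
# ~50 years average lifespan.
total_human_lifetime_s = 100e9 * 50 * 365.25 * 24 * 3600   # ~1.6e20 seconds

delay_per_person_s = 0.03  # the "generous" per-speck delay from the argument above

# 3^^^3 cannot be computed directly; even the vastly smaller 3^^4 = 3^(3^27)
# already settles the comparison, so use it as a lower bound on a log10 scale.
log10_people_lower_bound = (3 ** 27) * math.log10(3)        # ~3.6e12 digits in 3^(3^27)
log10_total_delay = math.log10(delay_per_person_s) + log10_people_lower_bound
log10_all_lifetimes = math.log10(total_human_lifetime_s)    # ~20.2

# True, and by roughly 3.6 trillion orders of magnitude.
print(log10_total_delay > log10_all_lifetimes)
```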
Theoretically, all this accumulated lost time could mean the difference between the extinction and the survival of the human race in the face of a pan-galactic super-cataclysmic event, simply by throwing us off the particular, Planck-level-exactly-timed course of events that would have allowed us to find a way to survive, just barely, by a few total seconds too close for comfort.
That last assumes the deciding agent has the superintelligent power to actually compute this. But if we are calculating from unknown future causal utilities, and the expected utility of a dust speck is still non-zero and negative, then this is a simple abstraction of the example above, and the rational choice is still simply the torture.
If you ask me the slightly different question, where I choose between 50 years of torture applied to one man, and 3^^^3 specks of dust falling one each into 3^^^3 people’s eyes plus all humanity being destroyed, I will give a different answer. In particular, I will abstain, because my moral calculation would then favor the torture over the destruction of the human race, but I have a built-in failure mode where I refuse to torture someone even when I somehow think it is the right thing to do.
But that is not the question I was asked. We could also have the man tortured for fifty years and then have the human race wiped out BECAUSE the pan-galactic cataclysm favors civilizations that don’t choose to torture people rather than face trivial inconveniences.
Consider this alternate proposal:
Hello Sir and/or Madam:
I am trying to collect 3^^^3 signatures in order to prevent a man from being tortured for 50 years. Would you be willing to accept a single speck of dust into your eye towards this goal? Perhaps more? You may sign as many times as you are comfortable with. I eagerly await your response.
Sincerely,
rkyeun
PS: Do you know any masochists who might enjoy 50 years of torture?
BCC: 3^^^3-1 other people.
We did specify no long-term consequences—otherwise the argument instantly passes, just because at least 3^^7625597484986 people would certainly die in car accidents due to blinking. (3^^^3 is 3 to the power of that.)
If you still use “^” to refer to Knuth’s up-arrow notation, then 3^^^3 != 3^(3^^26).
3^^^3 = 3^^(3^^3) = 3^^(3^27) != 3^(3^^27)
Fixed.
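For concreteness, here is a minimal sketch of Knuth’s up-arrow recursion (my own illustration, only evaluable for tiny inputs) that makes the identities above explicit:

```python
def knuth_up(a, n, b):
    """Compute a ↑^n b in Knuth's up-arrow notation (feasible only for tiny inputs)."""
    if n == 1:
        return a ** b          # one arrow is ordinary exponentiation
    if b == 0:
        return 1               # a ↑^n 0 = 1 for n >= 2
    return knuth_up(a, n - 1, knuth_up(a, n, b - 1))

print(knuth_up(3, 2, 2))  # 3^^2 = 3^3 = 27
print(knuth_up(3, 2, 3))  # 3^^3 = 3^(3^3) = 3^27 = 7625597484987
# 3^^^3 = 3^^(3^^3) = 3^^7625597484987 = 3^(3^^7625597484986): a power tower of
# 7,625,597,484,987 threes, far too large to evaluate, but the recursion shows why.
```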
I admit the argument of long-term “side effects” like extinction of the human race was gratuitous on my part. I’m just intuitively convinced that such possibilities would count towards the expected disutility of the dust motes in a superintelligent perfect rationalist’s calculations. They might even be the only reason there is any expected disutility at all, for all I know.
Otherwise, my puny tall-monkey brain wiring has a hard time imagining how a micro-fractional anti-hedon would actually count for anything other than absolute zero expected utility in the calculations of any agent with imperfect knowledge.
Sure. Admittedly, when there are 3^^^3 humans around, torturing me for fifty years is also such a negligible amount of suffering relative to the current lived human experience that it, too, has an expected cost that rounds to zero in the calculations of any agent with imperfect knowledge, unless they have some particular reason to care about me, which in that world is vanishingly unlikely.
Heh.
When put like that, my original post and arguments sure seem not to have been thought through as thoroughly as I thought they were.
Now, rather than thinking the solution obvious, I’m leaning more towards the idea that this eventually reduces to the problem of building a good utility function: one that also assigns the right value to the expected utilities calculated by other beings from their own unknown (or known?) utility functions, which may or may not irrationally assign disproportionate disutility to the respective hedon-values.
Otherwise, it’s rather obvious that a perfect superintelligence might find a way to make the tortured victim enjoy the torture and be enhanced by it, while also remaining a productive member of society during all fifty years of it (or some other completely ideal solution we can’t even remotely imagine), though this might directly contradict the implicit premise of torture being inherently bad, depending on interpretation/definition/etc.
EDIT: Which, upon reading up a bit more of the old comments on the issue, seems fairly close to the general consensus back in late 2007.