There’s the question of linearity, but if you use big enough numbers you can brute-force any nonlinear relationship, as Yudkowsky correctly pointed out some years ago. Take Kindly’s statement:
“There is some pair (N,T) such that (N people tortured for T seconds) is worse than (10^100 N people tortured for T-1 seconds), but I don’t know the exact values of N and T”
We can imagine a world where this statement is true (probably for a value of T really close to 1). And we can imagine knowing the correct values of N and T in that world. But even then, if a critical condition is met, it will be true that
“For all values of N, and for all T>1, there exists a value of A such that torturing N people for T seconds is better than torturing A*N people for T-1 seconds.”
Sure, the value of A may be larger than 10^100… but then, 3^^^3 is already vastly larger than 10^100, and if it weren’t big enough we could just throw a bigger number at the problem; there is no upper bound on how large a real number we can conceive of. So if we grant the critical condition in question, as Yudkowsky did in the original post…
Well, you basically have to concede that “torture” wins the argument, because even if you say that [hugenumber] of dust specks does not equate to a half-century of torture, that is NOT you winning the argument. That is just you trying to bid up the price of half a century of torture.
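To make the brute-force step concrete: if per-person disutility is some real-valued function u(T) of torture duration (the “critical condition” spelled out below), then total disutility is N*u(T), and A only needs to exceed the finite ratio u(T)/u(T-1). A minimal sketch, where the particular u is an invented stand-in rather than anything from the original discussion:

```python
import math

def log_u(t: float) -> float:
    """Log of a hypothetical per-person disutility u(t) = exp(t**1.5)
    for t seconds of torture: deliberately super-exponential, so the
    nonlinearity is as hostile to the argument as possible."""
    return t ** 1.5

def threshold_multiplier(t: float) -> float:
    """Smallest A for which A * N * u(t - 1) > N * u(t).
    N cancels, so A need only exceed u(t) / u(t - 1); the ratio is
    computed in log space to avoid overflow."""
    return math.exp(log_u(t) - log_u(t - 1))

for t in [2.0, 10.0, 100.0]:
    print(f"T = {t:g}: any A > {threshold_multiplier(t):.3e} flips the comparison")
```

However steep you make u, the threshold on A stays finite, and there is always a bigger number available to clear it.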
The critical condition that must be met here is simple, and is an underlying assumption of Yudkowsky’s original post: every form of suffering and inconvenience is represented by some real-number quantity, in units commensurate with those of every other form of suffering and inconvenience.
In other words, the “torture one person rather than allow 3^^^3 dust specks” answer wins, quite predictably, if and only if the ‘pain’ component of the utility function is measured in one and only one dimension.
So the question is, basically, do you measure your utility function in terms of a single input variable?
If you do, then either you bury your head in the sand and develop a severe case of scope insensitivity… or you conclude that there has to be some number of dust specks worse than a single lifetime of torture.
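Under that single-variable assumption, the break-even speck count is nothing more than a ratio of two scalars. A minimal sketch, with both disutility values invented purely for illustration:

```python
import math

# Both values are invented placeholders, not estimates from the original post.
torture_disutility = 1e15   # hypothetical: one person tortured for 50 years
speck_disutility = 1e-9     # hypothetical: one person, one dust speck

# On a single real-valued scale, the break-even count is a finite ratio.
break_even = math.ceil(torture_disutility / speck_disutility)
print(f"{break_even:.2e} specks outweigh the torture on this scale")
# 3^^^3 dwarfs any finite ratio of this kind, so "torture" wins.
```

Whatever placeholder numbers you plug in, the ratio comes out finite, and 3^^^3 swamps it.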
If you don’t, that raises a large complex of additional questions, but so far as I know there may well be room to construct coherent, rational systems of ethics in that realm of ideas.
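One illustrative possibility (a sketch of my own, not a claim about what such a system must look like): compare harms lexicographically, torture first, dust specks second. Python tuples compare exactly this way, so no finite speck count ever outweighs any nonzero amount of torture:

```python
from typing import NamedTuple

class Harm(NamedTuple):
    """Two incommensurable pain dimensions, ordered lexicographically:
    any nonzero torture outweighs any finite number of dust specks."""
    torture_person_seconds: float
    dust_specks: float

fifty_years = Harm(torture_person_seconds=50 * 365.25 * 86400, dust_specks=0)
many_specks = Harm(torture_person_seconds=0, dust_specks=3e100)  # stand-in for 3^^^3

# Tuple comparison is lexicographic: the speck count only matters when
# the torture components are exactly equal.
assert many_specks < fifty_years
print("The specks are the lesser harm under this ordering, at any count.")
```

The standard caveat applies: lexicographic preferences are the textbook example of a coherent ordering that no single real-valued utility function can represent, which is exactly what rejecting the one-dimension condition amounts to.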