g: that’s exactly what I’m saying. In fact, you can show something stronger than that.
Suppose we have an agent whose preferences are rational and who is minimally ethical, in the sense that they always prefer that fewer people have dust specks in their eyes and that fewer people be tortured. This seems to be something everyone agrees on.
Now, because they have rational preferences, we know that a bounded utility function consistent with their preferences exists. Furthermore, the fact that they are minimally ethical implies that this function is monotone in the number of people being tortured, and monotone in the number of people with dust specks in their eyes. The combination of a bound on the utility function, plus the monotonicity of their preferences, means that the utility function has a well-defined limit as the number of people with specks in their eyes goes to infinity. However, the existence of the limit doesn’t tell you what it is—it may be any value within the bounds.
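For instance, here are two toy disutility curves in S (a couple of lines of Python, with the curves chosen purely for illustration): both are monotone and bounded by 1, but they level off at very different values.
f1 = lambda S: S / (S + 1)          # monotone, bounded, limit 1.0 as S grows
f2 = lambda S: 0.25 * S / (S + 1)   # monotone, bounded, limit 0.25 as S grows
for S in (1, 10, 10**6):
    print(S, round(f1(S), 6), round(f2(S), 6))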
Concretely, we can supply utility functions that justify either choice and are consistent with minimal ethics. (I’ll work in terms of disutility, so lower is better, and take the bound to be the [0,1] interval.) In particular, all disutility functions of the form:
U(T, S) = A(T/(T+1)) + B(S/(S+1))
satisfy minimal ethics for all positive A and B with A + B < 1, which keeps the values inside [0,1]. Since A and B are free parameters, you can choose them to make either specks or torture preferred.
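To see the dependence on A and B, note that torturing one person costs A/2, while any number of specks costs less than B; so specks always win when B ≤ A/2, and torture wins for a large enough S when B > A/2. A small Python sketch, with the particular values of A and B picked arbitrarily:
def U(T, S, A, B):
    # bounded, parametrized disutility: lower is better
    return A * T / (T + 1) + B * S / (S + 1)

big = 10**12  # stand-in for an unimaginably large number of specked people
print(U(1, 0, A=0.6, B=0.2), U(0, big, A=0.6, B=0.2))  # 0.3 vs ~0.2: specks preferred
print(U(1, 0, A=0.2, B=0.6), U(0, big, A=0.2, B=0.6))  # 0.1 vs ~0.6: torture preferred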
Likewise, Robin and Eliezer seem to have an implicit disutility function of the form
U_ER(T, S) = AT + BS
If you normalize to get [0,1] bounds, you can make something up like
U’(T, S) = (AT + BS)/(AT + BS + 1).
Now, note U’ also satisfies minimal ethics, since it is increasing in both T and S. And if the torture option is a single person, its disutility is U’(1, 0) = A/(A+1), while the disutility of the specks option goes to one as S goes to infinity; so for a large enough number of specks it exceeds A/(A+1), no matter how small B is. That’s why they tend to have the intuition that torture is the right answer. (Incidentally, this disproves my suggestion that bounded utility functions vitiate the force of E’s argument; but the bounds proved helpful in the end by letting us use limit analysis, so my focus on this point was accidentally correct!)
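As a quick numerical check of that claim (the values of A and B below are arbitrary; nothing in their argument depends on them):
def U_prime(T, S, A=0.5, B=0.001):
    # normalized linear disutility (A*T + B*S) / (A*T + B*S + 1); lower is better
    x = A * T + B * S
    return x / (x + 1)

print(U_prime(1, 0))      # A/(A+1) = 1/3: one person tortured, no specks
print(U_prime(0, 10**6))  # ~0.999: enough specks to exceed 1/3
Even with a per-speck weight B hundreds of times smaller than A, the specks term climbs toward one and eventually overtakes A/(A+1).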
Now, consider yet another disutility function,
U″(T, S) = (ST + T)/(ST + T + 1)
This is also minimally ethical, and doesn’t have any of the free parameters that Tom didn’t like. But this function always implies a preference for any number of dust specks over even a single instance of torture: U″(1, 0) = 1/2, while U″(0, S) never gets that high however large S is.
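And a Python sanity check of that arithmetic:
def U_pp(T, S):
    # parameter-free bounded disutility; lower is better
    x = S * T + T
    return x / (x + 1)

print(U_pp(1, 0))       # 0.5: one person tortured, no specks
print(U_pp(0, 10**30))  # 0.0: the specks option never reaches 0.5
So under U″ the specks option always carries the smaller disutility, however many people are specked.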
Basically, if you think the answer is obvious, you must be relying on some additional assumptions about the structure of the aggregate preference relation.