It doesn’t work if you continuously increase the severity of the minor inconvenience or reduce the severity of the torture and try to find where the two become qualitatively comparable, as pointed out in this reply. The only way I see it working is to assign zero disutility to specks (I originally advocated putting it at the noise level). I then thought it was possible to make the argument work reasonably well even with a non-zero disutility, but at this point I don’t see how.
Utility(world) = e^(−#specks) + X·e^(−#tortured), where X is some constant larger than 1/(1 − 1/e) in the incommensurate case, and less than that in the commensurate limit.
Of course, this assumes some stuff about the number of people getting tortured / specked already—but that can be handled with a simple offset.
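To make the threshold concrete: one torture costs X·(1 − 1/e) of utility, while even infinitely many specks together cost at most 1, which is where 1/(1 − 1/e) comes from. A minimal sketch in Python (the function and the threshold are from the comment above; the particular values of X and the speck counts are just illustrative):

```python
import math

def utility(specks, tortured, X):
    """Bounded utility from the comment above: each term saturates,
    so all specks together cost at most 1 utility, while the first
    torture alone costs X * (1 - 1/e)."""
    return math.exp(-specks) + X * math.exp(-tortured)

def loss(specks, tortured, X):
    """Disutility relative to a world with no specks and no torture."""
    return utility(0, 0, X) - utility(specks, tortured, X)

threshold = 1 / (1 - 1 / math.e)  # ~1.582

# Incommensurate case: X above the threshold.
X = 2.0
print(loss(10**9, 0, X))  # ~1.0: a billion specks, essentially at the cap
print(loss(0, 1, X))      # ~1.264: one torture is strictly worse

# Commensurate case: X below the threshold.
X = 1.5
print(loss(10**9, 0, X))  # ~1.0
print(loss(0, 1, X))      # ~0.948: now enough specks outweigh one torture
```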
I don’t think this addresses the point in the link. What happens when you go from specks to something slightly nastier, like a pinch? Or slightly increase the time it takes to get rid of the speck? You ought to raise the cap on the disutility. Or if you reduce the length of the torture, you have to lower the disutility assigned to torturing one person. Eventually, the two intersect, unless you are willing to draw a sharp qualitative boundary between two very similar events.
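A quick numerical sketch of that crossover, in the spirit of the function above (the severity multiplier sigma scaling the minor-harm term is my own illustrative addition, not part of the proposed function):

```python
import math

# As the per-event severity sigma of the "minor" harm grows, the cap on its
# total disutility grows with it (sigma times the speck term's cap of 1),
# while one torture's disutility stays fixed at X * (1 - 1/e). The
# classification flips sharply at sigma = X * (1 - 1/e).
# (sigma is an illustrative parameter, not part of the original function.)
X = 2.0
torture_loss = X * (1 - 1 / math.e)  # ~1.264

for sigma in (0.5, 1.0, 1.264, 1.5, 2.0):
    speck_cap = sigma * 1.0  # max total disutility of arbitrarily many events
    verdict = "incommensurate" if torture_loss > speck_cap else "commensurate"
    print(f"sigma={sigma}: torture {verdict} with the minor harm")
```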
Yes, the two intersect. That’s what happens when you make things quantitative. Just because we are uncertain about where two things should, morally, intersect, does not mean that the intersection itself should be “fuzzy.”
The point is that without arbitrarily drawing the specks/torture boundary somewhere between x stabbed toes and x + epsilon stabbed toes, the suggested utility function does not work.
Hm, how can I help you see why I don’t think this is a problem?
How about this. The following two sentences contain exactly the same content to me:
“Without arbitrarily drawing the specks/torture boundary somewhere, the suggested utility function does not work.”
“Without drawing the specks/torture boundary somewhere, the suggested utility function does not work.”
Why? Because morality is already arbitrary. Every element is arbitrary. The question is not “can we tolerate an arbitrary boundary,” but “should this boundary be here or not?”
Are you saying that you are OK with having x stabbed toes being incommensurate with torture, but x + 1 being commensurate? This would be a very peculiar utility function.
Yes, that is what I am saying. One can deduce from this that I don’t find it so peculiar.
To be clear, this doesn’t reflect at all what goes on in my personal decision-making process, since I’m human. However, I don’t find it any stranger than, say, having torture be arbitrarily 3^3^2 times worse than a dust speck, rather than 3^3^2 + 5.
Sarcasm time: I mean, seriously—are you honestly saying that at 3^3^2 + 1 dust specks, it’s worse than torture, but at 3^3^2 − 1, it’s better? That’s so… arbitrary. What’s so special about those two dust specks? That would be so… peculiar.
As soon as you allow the arbitrary size of a number to be “peculiar,” there is no longer any such thing as a non-peculiar set of preferences. That’s just how consistent preferences work. Discounting sets of preferences on account of “strangeness and arbitrariness” isn’t worth the effort, really.
I don’t mean peculiar in any negative sense, just that it would not be suitable for goal optimization.
Is that really what you meant? Huh.
Could you elaborate?