I don’t think it’s ever okay to literally discount utility.
I’m actually (and this is my intro point) not sure it’s possible to avoid doing this. If we have some ice cream that you and I both want, we must necessarily engage in some weighing of our interest in the ice cream. There are objective measures we can use (e.g. how much we’d be willing to pay for it, how many hours of labor we’d sacrifice to obtain it, etc.), but I’m fairly confident there is not an Objective Measurement of True Utility that Tells Us Who Absolutely Deserves the Ice Cream. Much utilitarian thinking appears contingent on this philosophical fiction (perhaps this point is itself the primary one). Any selection of an objective criterion either implicitly or explicitly discounts something about one of the agents: willingness to pay favors the rich, willingness to spend time may favor the young or unemployed, and so on.
As for Bob the Rapist, the issue is not that he enjoys rape because it hurts other people, but that he knows it causes harm and doesn’t care. This may surprise you, but the vast majority of humanity is not composed of unweighted aggregate utilitarians. Though I think our actual disagreement may not exist: if he engages in fulfilling rape fantasies with consenting adults, or makes a simulated world for himself, or designs a sex-bot to enjoy being raped (which is itself an ontologically convoluted issue, but I digress), I’m not objecting. So it could be that my “discounting his utility” and your “dismissal for game-theoretic reasons” are essentially the same thing. If we call my system U’ versus standard U, perhaps the argument is that any kind of applied utilitarian framework needs to look more like U’ than like U.
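To make the U vs. U’ contrast concrete, here’s a toy sketch. It’s purely illustrative: the Pleasure class, the agent names, and the numbers are my own inventions, not anything canonical. U sums every agent’s utility as-is; U’ simply refuses to count utility an agent derives from non-consensually harming someone else.

```python
from dataclasses import dataclass

@dataclass
class Pleasure:
    agent: str
    amount: float
    from_nonconsensual_harm: bool = False

def U(pleasures):
    """Standard unweighted aggregate: every unit of utility counts the same."""
    return sum(p.amount for p in pleasures)

def U_prime(pleasures):
    """Discounted aggregate: utility derived from non-consensual harm counts for nothing."""
    return sum(p.amount for p in pleasures if not p.from_nonconsensual_harm)

outcome = [
    Pleasure("Bob", 10.0, from_nonconsensual_harm=True),  # Bob's enjoyment of the act
    Pleasure("victim", -100.0),                           # the victim's suffering
]

print(U(outcome))        # -90.0: the act still loses, but only because the harm outweighs Bob's gain
print(U_prime(outcome))  # -100.0: Bob's gain simply doesn't count
```

Both systems condemn the act here; they differ in whether Bob’s enjoyment is on the books at all, which matters once you start scaling his side of the ledger up.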
Without writing a whole article on the issue, there does appear to be a difference between forcibly raping someone and wearing fur. Off the cuff, I’d guess the issue is one of marginal effect. Animal lovers tend to object to the existence of fur coats generally: the step from 1 fur coat to 100 or 100,000 fur coats is smaller than the step from 0 to 1, and they do not “feel” each fur coat the way a person “feels” being raped.
I’m not disagreeing that crimes are bad, just that this should be stated as saying that whatever utility they give the perpetrator is overruled by the disutility they give the victim.
This has occasional theoretical implications: for example, if Bob were insane and incapable of realizing that his actions harmed his victim, we would still perform exactly the same calculation and stop him, even though the “he knows it causes harm and doesn’t care” argument is void.
Even if the panda analogy isn’t perfect, there are plenty of analogous cases in which two people’s utilities compete for non-evil reasons: for example, if we both want pizza and there’s only one slice left, my taking it isn’t bad in itself, but if you are hungrier it may be that my (perfectly valid) desire for pizza is a utilitarian loss and should be prevented.
Given that this same principle of “if two people’s interests are in conflict, we stop the person with the lower stake from pursuing that interest at the expense of the person with the higher stake” is sufficient to explain why crimes are bad, I don’t see why another explanation is needed.
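As a minimal sketch of that principle (the stake numbers below are invented purely for illustration), the rule is just: whoever has the lower stake in the contested act yields. Nothing about evil motives needs to enter into it.

```python
def who_yields(stakes):
    """Given each party's stake in the contested act or resource, the lower-stake party yields."""
    return min(stakes, key=stakes.get)

print(who_yields({"me": 3, "you": 8}))         # 'me': you're hungrier, so I give up the pizza slice
print(who_yields({"Bob": 10, "victim": 100}))  # 'Bob': the victim's stake dwarfs his, so we stop him
```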
On an unrelated note, I’ve heard people suggest that it’s a bad idea to use rape as an example in a case where any other example is possible because it’s an extreme emotional trigger for certain people. I’m going to try to use murder as my go-to example of an immoral hurtful act from now on, on the grounds that it conveniently removes its victims from the set of people with triggers and preferences.
I’m not disagreeing that crimes are bad, just that this should be stated as saying that whatever utility they give the perpetrator is overruled by the disutility they give the victim.
That’s kind of the problem I’m getting at. Suppose we could torture one person and film it, creating a superlatively good video that would make N sadists very happy when they watched it, in significant part because they value its authenticity. It seems that, if you choose torture over dust specks, you are similarly obliged to choose torture video over no video once N is sufficiently large, whatever “sufficiently large” means. Interestingly, this applies even if there exist very close but inferior substitutes; N just needs to be larger. On the other hand, discounting non-consensual sadism resolves this as “don’t torture.”
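Back-of-the-envelope, with invented magnitudes (the specific numbers are mine, chosen only to show the shape of the argument): under plain aggregation there is always some threshold N past which the video “wins,” while under the discounted U’ the sadists’ enjoyment contributes nothing, so no N is ever sufficient.

```python
TORTURE_DISUTILITY = -1_000_000.0   # harm to the one person tortured (arbitrary scale)
PER_VIEWER_UTILITY = 0.5            # enjoyment each sadist gets from the authentic video

def plain_total(n_viewers):
    """Unweighted aggregate U: viewers' enjoyment accumulates without limit."""
    return TORTURE_DISUTILITY + n_viewers * PER_VIEWER_UTILITY

def discounted_total(n_viewers):
    """U': non-consensual sadistic enjoyment is discounted to zero, so N never matters."""
    return TORTURE_DISUTILITY

threshold = int(-TORTURE_DISUTILITY / PER_VIEWER_UTILITY)  # 2,000,000 viewers on these numbers
print(plain_total(threshold + 1) > 0)   # True: plain aggregation now favors making the video
print(discounted_total(10**12) > 0)     # False: no N is ever "sufficiently large"
```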
The central problem may be one of measurement, one of incentives (we don’t want people cultivating non-consensual sadistic desires), or a combination of the two. Perhaps my goals are more pragmatic than conceptual.
The central problem may be one of measurement, one of incentives (we don’t want people cultivating non-consensual sadistic desires), or a combination of the two.
I think this is pretty much it. We don’t want people to want to rape coma patients. We don’t want coma-rape to become common enough that people are afraid of having it happen to them. Similarly, if we decide to make this film, everyone has to be afraid that they or someone they know could be the person picked to be tortured, and the idea of torturing innocents becomes more normal. In general, caution should be applied in situations like this, even if no extreme disutility is immediately obvious (see http://lesswrong.com/lw/v0/ethical_inhibitions/).