I’m not disagreeing that crimes are bad, just that the reason should be stated as: whatever utility they give the perpetrator is outweighed by the disutility they inflict on the victim.
This has occasional theoretical implications: for example, if Bob were insane and incapable of realizing that his actions harmed his victim, we would still perform exactly the same calculation and stop him, even though the “he knows it causes harm and he doesn’t care” argument is void.
Even if the panda analogy isn’t perfect, there are plenty of analogies for acts where two people’s utility is in competition for non-evil reasons: for example, if we both want pizza and there’s only one slice left, my taking it isn’t bad in itself, but if you are hungrier, it may be that my (perfectly valid) desire for pizza is a utilitarian loss and should be prevented.
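To put toy numbers on it (these are made up purely for illustration): if my enjoyment of the slice is worth 3 utils and your relief from hunger is worth 10, my eating it comes out to 3 - 10 = -7 on the aggregate, so the calculation says to stop me even though my desire itself is perfectly innocent.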
Given that this same principle of “if two people’s interests are in conflict, we stop the person with the lower stake from pursuing that interest at the expense of the person with the higher stake” is sufficient to explain why crimes are bad, I don’t see why another explanation is needed.
On an unrelated note, I’ve heard people suggest that it’s a bad idea to use rape as an example in a case where any other example is possible because it’s an extreme emotional trigger for certain people. I’m going to try to use murder as my go-to example of an immoral hurtful act from now on, on the grounds that it conveniently removes its victims from the set of people with triggers and preferences.
I’m not disagreeing that crimes are bad, just that the reason should be stated as: whatever utility they give the perpetrator is outweighed by the disutility they inflict on the victim.
That’s kind of the problem I’m getting at. Suppose we could torture one person and film it, creating a superlatively good video that would make N sadists very happy when they watched it, in large part because they value its authenticity. It seems that, if you choose torture over dust specks, you are similarly obliged to choose torture video over no video once N is sufficiently large, whatever “sufficiently large” means. Interestingly, this applies even if there exist very close but inferior substitutes; N just needs to be larger. On the other hand, discounting non-consensual sadism resolves this as “don’t torture.”
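Roughly, with made-up symbols: if the torture costs its victim T utils and each sadist gains s utils from watching the authentic video, straight aggregation says to film it once N*s > T, i.e. N > T/s. If close-but-inferior substitutes worth s' per viewer already exist, the authentic video’s marginal value drops to s - s', and the threshold only rises to N > T/(s - s'); it never goes away.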
The central problem may be one of measurement, one of incentives (we don’t want people cultivating non-consensual sadistic desires), or a combination of the two. Perhaps my goals are more pragmatic than conceptual.
The central problem may be one of measurement, one of incentives (we don’t want people cultivating non-consensual sadistic desires), or a combination of the two.
I think this is pretty much it. We don’t want people to want to rape coma patients. We don’t want coma-rape to become common enough that people are afraid of having it happen to them. Similarly, if we decide to make this film, everyone has to fear that they or someone they know could be the one picked to be tortured, and the idea of torturing innocents becomes more normal. In general, caution should be applied in situations like this, even when no extreme disutility is immediately obvious (see http://lesswrong.com/lw/v0/ethical_inhibitions/).