But this ignores actually doing the math! Suppose it is known that she would prefer abcd_z’s company to the other fellow’s, that abcd_z would prefer her company to no one’s, and that the other fellow would prefer her company to no one’s, but his preference is weaker than theirs. The “stealing other people’s partners is bad” rule puts precedent above the greatest good.* The claim that it’s existentially risky doesn’t require utilitarianism; a selfish person is more concerned about those sorts of incentives than a utilitarian is.
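(To make the sum concrete, here is a minimal sketch in Python. The cardinal utilities are entirely made up; all the scenario gives us is the preference ordering, so the numbers only need to respect it.)

```python
# Hypothetical cardinal utilities; only the ordering above is given,
# so all of these numbers are invented for illustration.
her_with_abcd_z = 5   # she prefers abcd_z's company...
her_with_other  = 3   # ...to the other fellow's
abcd_z_with_her = 4   # abcd_z prefers her company to no one's
other_with_her  = 2   # so does the other fellow, but more weakly

# Naive first-order totals for the two pairings
# (whoever is left out gets utility 0):
total_if_abcd_z = her_with_abcd_z + abcd_z_with_her  # 5 + 4 = 9
total_if_other  = her_with_other + other_with_her    # 3 + 2 = 5

assert total_if_abcd_z > total_if_other  # the naive sum favors "stealing"
```

Of course, this is only the first-order term; the second order effects discussed below are exactly what the footnote is about.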
I am of the opinion that utilitarianism is wrong wrong wrong, but treating it as a moral decision procedure is even more wrong. If you’re going to be a utilitarian, be a utilitarian at the meta level: think about what moral decision procedure will lead you (given your cognitive and other limitations) to maximize utility in the long run. I think there are many good reasons to believe that doing the math at every decision point will not be the optimal procedure in this sense. Of course, it would be if you were a fully informed, perfectly rational superbeing with infinite willpower and effectively infinite processing speed, but alas, even I cannot yet claim that status.
Given this unfortunate state of affairs, I suspect it is actually a better idea for most utilitarians to commit themselves to a policy like “Don’t steal someone else’s partner” rather than attempt to do the math every time they are faced with the decision. Of course, there may still be times when it’s just blindingly obvious that the math is in favor of stealing, in which case screw the policy.
Given this unfortunate state of affairs, I suspect it is actually a better idea for most utilitarians to commit themselves to a policy like “Don’t steal someone else’s partner” rather than attempt to do the math every time they are faced with the decision.
See the paragraph that follows on second order effects. In the context of flirting with people in clubs, rather than attempting to break up established relationships, the policy of “don’t interrupt someone else’s flirting” is probably suboptimal.
(Did you not think that paragraph explained the point? Should I have put the asterisk up higher? I’m confused why you made the objection you did, when a sibling comment engaged directly with my discussion of second order effects.)
Of course, there may still be times when it’s just blindingly obvious that the math is in favor of stealing, in which case screw the policy.
The primary reason to have a policy like this is that you trust your offline math more than your online math. In that case, unless the policy has a clear escape clause you reasoned through offline, you should trust the policy even when your online math screams that you shouldn’t.
Did you not think that paragraph explained the point? Should I have put the asterisk up higher?
There is a much simpler explanation: I completely misunderstood what you meant by “second order effects” and then didn’t really read the rest of the footnote because I considered it irrelevant to what I was interested in talking about. How embarrassing. I did admit that I am not yet fully informed and perfectly rational, though.
Thanks for the feedback! I’ll be more careful about using that phrase in the future.
Utilitarianism is certainly correct. You can observe this by watching people make decisions under uncertainty: the trade-offs they accept between probabilities show that preferences aren’t merely ordinal.
But yes, doing the math has its own utility cost, so many decisions are better handled with approximations. This is how you get things like the Allais paradox.
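(For anyone who wants the concrete version, here is a small sketch of the standard Allais lotteries. The probabilities and payoffs are the textbook ones; the utility function is an arbitrary illustrative choice.)

```python
import math

def expected_utility(lottery, u):
    """Expected utility of a lottery given as {payoff: probability}."""
    return sum(p * u(x) for x, p in lottery.items())

# The standard Allais lotteries (payoffs in millions of dollars).
g1a = {1: 1.00}                      # $1M for sure
g1b = {1: 0.89, 5: 0.10, 0: 0.01}
g2a = {1: 0.11, 0: 0.89}
g2b = {5: 0.10, 0: 0.90}

# Pick any increasing utility function you like, e.g. u(x) = sqrt(x).
u = math.sqrt

# Most people choose 1A over 1B *and* 2B over 2A. But each experiment-2
# lottery is its experiment-1 counterpart with a common 0.89 chance of
# $1M swapped for $0, so both comparisons shift by the same constant
# 0.89 * (u(1) - u(0)); no utility function can endorse both choices.
prefers_1a = expected_utility(g1a, u) > expected_utility(g1b, u)
prefers_2b = expected_utility(g2b, u) > expected_utility(g2a, u)
print(prefers_1a, prefers_2b)  # never (True, True), whatever u is
```

The paradox is that the common pattern of choices (1A and 2B) is inconsistent with maximizing any expected utility, which is what you would expect from agents running cheap approximations rather than doing the math.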
I’m not sure what “moral” means here. The goal of a gene is to copy itself. Ethics isn’t about altruism.