A good utilitarian calculation would count their moral anguish, but it would also note that the best long-term equilibrium is one in which the original moral attitude has disappeared.
Mm? If I have a strong inclination to do X, and a strong moral intuition that “X is wrong”, and I suffer anguish because of the conflict, how do you conclude that the best result is that the moral intuition disappears… either in that particular case, or in general? I mean, I happen to agree with you about the particular case you mention, but I don’t see how you reached it.
It’s worth noting, incidentally, that in addition to “eliminate the intuition” and “eliminate the inclination” there is the option of “eliminate the anguish.” That is, I might reach a point where I want to do X, and I think X is wrong, and I experience conflict between those impulses, and I work out some optimal balance between those conflicting impulses, and I am not anguished by this in any way.
Good question; here’s a formal answer:
We can, in the long run, keep the inclination and eliminate the intuition, keep the intuition and drop the inclination, or keep both. The utility of each is:
Side with the inclination: the fun of (less the pain of) indulging the inclination, counting only what is intrinsic to the action itself (e.g. the pain of being tortured, but not the discomfort others feel because it is being inflicted), minus the effort needed to eliminate the intuition (plus any second-order effects of eliminating it).
Side with the intuition: zero, minus the effort needed to eliminate the inclination (plus any second-order effects of eliminating it).
Status quo: the fun of (less the pain of) indulging the inclination, at the reduced rates people will engage in given the moral disapproval, minus the pain felt by those who hold the intuition.
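To make the comparison concrete, here is a minimal sketch of that calculation in Python. The function names and every number in it are my own illustrative assumptions, not anything established above, and second-order effects are left out for brevity.

```python
# Toy model of the three long-run options above. All values are
# illustrative assumptions, not measurements, and second-order
# effects are omitted.

def side_with_inclination(intrinsic_fun, cost_to_remove_intuition):
    # Everyone indulges freely; pay once to eliminate the intuition.
    return intrinsic_fun - cost_to_remove_intuition

def side_with_intuition(cost_to_remove_inclination):
    # Nobody indulges (baseline zero); pay to eliminate the inclination.
    return 0 - cost_to_remove_inclination

def status_quo(intrinsic_fun, indulgence_rate, disapprovers_pain):
    # Indulgence happens at a reduced rate under disapproval, and
    # those who hold the intuition suffer over what indulgence remains.
    return intrinsic_fun * indulgence_rate - disapprovers_pain

# Plugging in numbers for the homosexuality case discussed below:
# the act is fun and intrinsically harmless, and homophobia is far
# cheaper to eliminate than the orientation itself.
options = {
    "eliminate the intuition": side_with_inclination(10, 2),
    "eliminate the inclination": side_with_intuition(8),
    "keep both (status quo)": status_quo(10, 0.5, 4),
}
print(max(options, key=options.get))  # eliminate the intuition
```

Note that the pill scenario discussed further down just amounts to dropping cost_to_remove_inclination low enough that the second option wins.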
I think that this adequately explains 1) why the OP felt ambivalent about counting moral intuitions—even if they count in principle, you really should ignore them sometimes—and 2) why your and my intuitions agree in the homosexuality case: gay sex is fun and doesn’t intrinsically harm anyone, homophobia brings little joy even to its holders, and it would take much less effort to eliminate homophobia than homosexuality.
This model grounds common moral intuitions like “if it doesn’t harm anybody [implied: beyond the feelings of those who morally disapprove], why ban it?,” or that inborn preferences should take precedence over inculcated ones, and so on. And it seems that even the other side in this debate is operating from this framework: hence their propensity to argue that homosexuality is not inborn, or that homophobia is, or that there would be very bad second-order effects of the disappearance of homophobia (you would marry your box turtle!).
I don’t really think “eliminate the anguish” can work over the long term. An individual might be mistaken about how much moral anguish she’d experience over something, but in the long run, moral intuitions that don’t affect you emotionally aren’t going to affect you behaviorally. (This is the reason real-life utilitarians don’t Feed The Utility Monster unless they have an enabling peer group.)
Ah, I see what you mean. So if the world changed such that eliminating the inclination cost less than eliminating the intuition—say, we discover a cheap-to-produce pill that makes everybody who takes it heterosexual without any other side-effects—a good utilitarian would, by the same token, conclude that the best long-term equilibrium is one in which the inclination disappeared. Yes?
In principle, I suppose so (though if you’re in a relationship, a pill to change your orientation is hardly low-cost!)
Agreed, though that simply extends the definition of “long-term equilibrium” a few generations.
Anyway, cool; I’d misunderstood your original claim to be somewhat more sweeping than what you actually meant, which is why I was uncertain. Thanks for clarifying!