Preferring utilitarianism is a moral intuition, just like preferring Life Extension. The former’s a general intuition, the latter’s an intuition about a specific case.
So it’s not a priori clear which intuition to modify (general or specific) when the two conflict.
I don’t agree that preferring utilitarianism is necessarily a moral intuition, though I agree that it can be.
Suppose I have moral intuitions about various (real and hypothetical) situations that lead me to make certain judgments about those situations. Call the ordered set of situations S and the ordered set of judgments J.
Suppose you come along and articulate a formal moral theory T that also (and independently) produces J when evaluated in the context of S.
In this case, I wouldn’t call my preference for T a moral intuition at all. I’m simply choosing T over its competitors because it better predicts my observations of the world; the fact that those observations are about moral judgments is beside the point.
If I subsequently make judgment Jn about situation Sn, and then evaluate T in the context of Sn and get Jn’ instead, there’s no particular reason for me to change my judgment of Sn (assuming I even could). I would only do that if I had substituted T for my moral intuitions… but I haven’t done that. I’ve merely observed that evaluating T does a good job of predicting my moral intuitions (despite failing in the case of Sn).
If you come along with an alternate theory T2 that gets the same results T did except that it predicts Jn given Sn, I might prefer T2 to T for the same reason I previously preferred T to its competitors. This, too, would not be a moral intuition.
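Purely as an illustration of this model-selection framing (it's not part of the original exchange), here's a minimal Python sketch. The situations, judgments, and both "theories" are hypothetical stand-ins; the point is only that preferring T2 over T amounts to preferring the better predictor of one's intuited judgments, which is an ordinary empirical criterion rather than a moral intuition:

```python
# Hypothetical encoding of the ordered sets S and J from the argument above.
situations = ["S1", "S2", "S3", "Sn"]
judgments = {"S1": "wrong", "S2": "permissible",
             "S3": "wrong", "Sn": "permissible"}  # my intuited judgments J

def theory_T(s):
    # Stand-in for theory T: reproduces my judgments on S1..S3,
    # but yields Jn' ("wrong") instead of Jn ("permissible") on Sn.
    return {"S1": "wrong", "S2": "permissible",
            "S3": "wrong", "Sn": "wrong"}[s]

def theory_T2(s):
    # Stand-in for the alternate theory T2: agrees with T everywhere
    # except Sn, where it predicts the intuited judgment Jn.
    return {"S1": "wrong", "S2": "permissible",
            "S3": "wrong", "Sn": "permissible"}[s]

def score(theory):
    # Count how many intuited judgments the theory reproduces.
    return sum(theory(s) == judgments[s] for s in situations)

# Choosing between theories by predictive fit, not by moral intuition:
best = max([theory_T, theory_T2], key=score)
print(best.__name__, score(best))  # -> theory_T2 4
```

Nothing in the scoring step consults a moral intuition; the intuitions enter only as the data being predicted, which is the distinction the argument turns on.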