When our intuitions in a particular case contradict the moral theory we thought we held, we need some justification for amending the moral theory other than “I want to.”
I think the point is that utilitarianism is very, very flexible, and whatever it is about us that tells us to prefer Life Extension should already be in there—the only question is how we formalize it.
Presumably that depends on how we came to think we held that moral theory in the first place.
If I assert moral theory X because it does the best job of reflecting my moral intuitions, for example, then when I discover that my moral intuitions in a particular case contradict X, it makes sense to amend X to better reflect my moral intuitions.
That said, I certainly agree that if I assert X for some reason unrelated to my moral intuitions, then modifying X based on my moral intuitions is a very questionable move.
It sounds like you’re presuming that the latter is generally the case when people assert utilitarianism?
Preferring utilitarianism is a moral intuition, just like preferring Life Extension. The former’s a general intuition, the latter’s an intuition about a specific case.
So it’s not a priori clear which intuition to modify (general or specific) when the two conflict.
I don’t agree that preferring utilitarianism is necessarily a moral intuition, though I agree that it can be.
Suppose I have moral intuitions about various (real and hypothetical) situations that lead me to make certain judgments about those situations. Call the ordered set of situations S and the ordered set of judgments J.
Suppose you come along and articulate a formal moral theory T which also (and independently) produces J when evaluated in the context of S.
In this case, I wouldn’t call my preference for T a moral intuition at all. I’m simply choosing T over its competitors because it better predicts my observations of the world; the fact that those observations are about moral judgments is beside the point.
If I subsequently make judgment Jn about situation Sn, and then evaluate T in the context of Sn and get Jn’ instead, there’s no particular reason for me to change my judgment of Sn (assuming I even could). I would only do that if I had substituted T for my moral intuitions… but I haven’t done that. I’ve merely observed that evaluating T does a good job of predicting my moral intuitions (despite failing in the case of Sn).
If you come along with an alternate theory T2 that gets the same results T did except that it predicts Jn given Sn, I might prefer T2 to T for the same reason I previously preferred T to its competitors. This, too, would not be a moral intuition.
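To make that concrete, here's a rough sketch (in Python, with made-up situations and stand-in theories—nothing here is meant as a real moral theory) of the kind of model comparison I have in mind: score each candidate theory by how many of my recorded judgments it reproduces, and prefer the higher-scoring one.

```python
# Toy sketch of "prefer the theory that better predicts my judgments".
# All situations, judgments, and theories below are hypothetical stand-ins.

situations = ["trolley", "transplant", "life_extension"]                 # S
my_judgments = {"trolley": "pull",                                       # J
                "transplant": "don't",
                "life_extension": "prefer"}

def theory_T(situation):
    # Stand-in for evaluating theory T on a situation; disagrees on the new case.
    return {"trolley": "pull", "transplant": "don't",
            "life_extension": "replace"}[situation]

def theory_T2(situation):
    # Alternate theory that also reproduces the judgment about the new case.
    return {"trolley": "pull", "transplant": "don't",
            "life_extension": "prefer"}[situation]

def agreement(theory, situations, judgments):
    """Count how many recorded judgments the theory reproduces."""
    return sum(theory(s) == judgments[s] for s in situations)

# Preferring the higher-scoring theory is a model-selection move,
# not itself a moral intuition.
best = max([theory_T, theory_T2],
           key=lambda t: agreement(t, situations, my_judgments))
print(best.__name__)  # theory_T2
```

The point of the sketch is only that the preference for T2 over T comes from predictive fit, the same way one scientific hypothesis gets preferred over another.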
Well, if you view moral theories as if they were scientific hypotheses, you could reason in the following way: when a moral theory/hypothesis makes a counterintuitive prediction, you could 1) reject your intuition, 2) reject the hypothesis (“I want to”), or 3) revise your hypothesis.
It would be practical if one could actually try out a moral theory, but I don’t see how one could go about doing that…
Right—I don’t claim any of my moral intuitions to be true or correct; I’m an error theorist, when it comes down to it.
But I do want my intuitions to be consistent with each other. So if I have the intuition that utility is the only thing I value for its own sake, and I have the intuition that Life Extension is better than Replacement, then something’s gotta give.