Frank, I think a utility function like that is a mathematical abstraction, and nothing more. People do not, in fact, hold scalar-ranked preferences over every possible hypothetical outcome; they are essentially indifferent among a wide range of choices. And anyway, I’m not sure that there is sufficient agreement among moral agents to permit the useful aggregation of their varied, and sometimes conflicting, notions of what is preferable into a single metric. And even if we could do that, I’m not sure that such a function would correspond with all (or even most) of the standard ways that we use moral language.
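To pin down what I mean by a scalar ordering, here is a minimal sketch of the standard formulation (the symbols $X$, $\succeq$, and $u$ are just placeholder notation, not anything you have committed to): a utility function $u : X \to \mathbb{R}$ represents a preference relation $\succeq$ over a set of outcomes $X$ only if

$$u(x) \ge u(y) \iff x \succeq y \quad \text{for all } x, y \in X,$$

which in turn requires $\succeq$ to be complete (every pair of outcomes is comparable) and transitive. The completeness condition, over every hypothetical outcome, is exactly the part I am denying above.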
The statement that X is wrong can be taken to mean that X has bad consequences according to some metric. It can also mean (or be used to perform the functions of) the following variants:
(1) I do not approve of X.
(2) X makes me squeamish.
(3) Most people in [relevant group] would disapprove of X.
(4) X is not an exemplar of an action that corresponds with what I believe to be appropriate rules to live by.
(5) [Same as (4), but with the relevant social group as the reference point.]
(6) X is not an action that would be performed by a virtuous person operating in similar circumstances.
(7) I do not want X to occur.
(8) Do not do X.
That is probably not an exhaustive list, and most uses of moral language probably blur the lines between several of these statements. Even if you want to limit the discussion to consequences, however, you have to pick a metric; if you are referring only to “bad” or “undesirable” consequences, you have to incorporate some other form of moral reasoning in order to articulate why your particular metric is constitutive or representative of what is wrong.
Hence, I think the problem with your argument is that (a) I’m not sure that there is enough agreement about morality to make a universal scalar ordering meaningful, and (b) a scalar ordering would be meaningless for many plausible variants of what morality means.