Ethics/morality is generally understood to be a way to answer the question, “what is the right thing to do [in some circumstance / class of circumstances]?” (or, in other words, “what ought I to do [in this circumstance / class of circumstances]?”)
If, in answer to this, your ethical framework / moral system / etc. says “well, action X is better than action Y, but even better would be action Z”, then you don’t actually have an answer to your question (yet), do you? Because the obvious follow-up is, “Well, ok, so… which of those things should I do? X? Or Y? Or Z…?”
At that point, your morality can give you one of three answers:
1. “Any of those things is acceptable. You ought to do something in the set { X, Y, Z } (but definitely don’t do action W!); but which of those three things to do, is really up to you. Although, X is more morally praiseworthy than Y, and Z more praiseworthy than X. If you care about that sort of thing.”
2. “You ought to do the best thing (which is Z).”
3. “I cannot answer your question. There is no right thing to do, nor is there such a thing as ‘the thing you ought to do’ or even ‘a thing you ought to do’. Some things are simply better than others.”
If your morality gives answer #3, then what you have is actually not a morality, but merely an axiology. In other words, you have a ranking of actions, but what do you do with this ranking? Not clear. If you want your initial question (“what ought I to do?”) answered, you still need a morality!
Now, an axiology can certainly be a component of a morality. For example, if you have a decision rule that says “rank all available actions, then do the one at the top of the ranking”, and you also have a utilitarian axiology, then you can put them together and presto!—you’ve got a morality. (You might have a different decision rule instead, of course, but you do need one.)
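To see how the pieces fit together, here is a minimal sketch in Python (the names and welfare numbers are invented for illustration, not part of the argument): the axiology is represented as a value function over actions, the decision rule says “take the top-ranked action”, and composing the two yields an answer to “what ought I to do?”.

```python
from typing import Callable, Iterable

Action = str
Axiology = Callable[[Action], float]  # the evaluative ranking: higher is better

def maximizing_rule(actions: Iterable[Action], value: Axiology) -> Action:
    """The "do the one at the top of the ranking" decision rule."""
    return max(actions, key=value)

# A toy utilitarian axiology: score each action by the welfare it produces.
# (The numbers are invented; they just realize the ranking Z > X > Y > W.)
welfare = {"Z": 10.0, "X": 3.0, "Y": 1.0, "W": -5.0}

# Axiology + decision rule = a morality: it answers the question.
print(maximizing_rule(welfare, welfare.get))  # -> "Z"
```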
Answer #3 plus a “do the best thing, out of this ranking” decision rule is, of course, just answer #2, so that’s all well and good.
In answer #1, we are supposing that we have some axiology (evaluative ranking) that ranks actions Z > X > Y > W, and some decision rule that says “do any of the first three (feel free to select among them according to any criteria you like, including random choice), and you will be doing what you ought to do; but if you do W, you’ll have done a thing you ought not to do”. Now, what can be the nature of this decision rule? There would seem to be little alternative to the rule being a simple threshold of some sort: “actions that are at least this good [in the evaluative ranking] are permissible, while actions worse than this threshold are impermissible”. (In the absence of such a decision rule, you will recall, answer #1 degenerates into answer #3, and ceases to be a morality.)
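Such a threshold rule is easy to state precisely. Continuing the illustrative sketch above (again, the welfare scores are invented for the example): instead of returning the single best action, the rule returns the set of permissible ones, namely those whose value clears the threshold.

```python
from typing import Callable, Iterable

Action = str
Axiology = Callable[[Action], float]

def threshold_rule(actions: Iterable[Action], value: Axiology,
                   threshold: float) -> list[Action]:
    """Decision rule for answer #1: every action at least as good as the
    threshold is permissible; everything below it is forbidden."""
    return [a for a in actions if value(a) >= threshold]

# Invented welfare scores realizing the ranking Z > X > Y > W:
welfare = {"Z": 10.0, "X": 3.0, "Y": 1.0, "W": -5.0}

# With the threshold at zero (the "do no harm" case discussed below),
# X, Y, and Z are all permissible, and only W is forbidden:
print(threshold_rule(welfare, welfare.get, threshold=0.0))
# -> ['Z', 'X', 'Y']
```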
Well, fair enough. But how to come up with the threshold? On what basis to select it? How to know it’s the right one—and what would it mean for it to be right (or wrong)? Could two moralities with different permissibility thresholds (but with the same, utilitarian, axiology) both be right?
Note that the lower you set the threshold, the emptier your morality becomes of any substantive content. For instance, if you set the threshold at exactly zero—meaning that actions which do no harm (whether they do no good at all, or some good) are permitted, while harmful actions are forbidden—then your morality boils down to “do no harm (but doing good is praiseworthy, and the more the better)”. Not a great guide to action!
On the other hand, the higher you set the threshold, the closer you get to answer #2.
And in any event, the questions about how to correctly locate the threshold remain unanswered…