The author says a moral theory should:
“Cover how one should act in all situations” (instead of dealing only with ‘moral’ ones)
Contain no contradictions
“Cover all situations in which somebody should perform an action, even if this “somebody” isn’t a human being”
In other words, a decision theory, complete with an algorithm (so you can actually use it), and a full set of terminal goals. Not what anyone else means by “moral theory”.
The author is far from alone in his view that both a complete rightness criterion and a consistent decision method must be required of all serious moral theories.
Among hedonistic utilitarians it’s quite normal to demand both completeness, to include all (human) situations, and consistency, to avoid contradictions. The author simply describes what’s normal among consequentialists, who, after all, are more or less the rational ones. ;-) There’s one interesting exception, though! The demand to include all situations, including the non-human ones, is radical, and quite a hard challenge for hedonistic utilitarians, who do have problems with the bloodthirsty predators of the jungle.
Among hedonistic utilitarians it’s quite normal to demand both completeness
Utilitarianism provides no guidance on many decisions: any decision where the available actions produce the same utility.
Even if it is a complete theory, I don’t think that completeness is demanded of the theory; rather it’s merely a tenet of it. I can’t think of any good a priori reasons to expect a theory to be complete in the first place.
Two different actions don’t produce exactly the same utility, but even if they did it wouldn’t be any problem. To say that you may choose either of two actions when it doesn’t matter which one you choose, since they have the same value, isn’t to give “no guidance”. Consequentialists want to maximize the intrinsic value, and both these actions do just that.
Of course hedonistic utilitarianism doesn’t require completeness, which, by the way, isn’t one of its tenets either. But since it is complete, which of course is better than being incomplete, it’s normal for hedonistic utilitarians to hold the metaethical view that a proper moral theory should answer the whole of the question “Which actions ought to be performed?” What would be so good about answering it incompletely?
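To make the tie case concrete, here is a minimal sketch in Python (my own toy illustration; the names and numbers in it are made up, not anything from this thread) of a maximizing decision rule. When two actions tie for maximal utility, the rule still gives guidance: it returns both as permissible, and picking any one of them counts as right.

```python
# Toy sketch only -- "choose", "utility", and the wardrobe example are
# illustrative names, not anything from the discussion above.

def choose(actions, utility):
    """Return every permissible action: those whose utility is maximal.

    A tie doesn't strip away guidance; it just means more than one action
    is permissible, and picking any one of them satisfies the rule.
    """
    best = max(utility(a) for a in actions)
    return {a for a in actions if utility(a) == best}

# Two shirts with exactly equal utility: both come back as permissible.
wardrobe = {"blue shirt": 1.0, "green shirt": 1.0, "torn shirt": 0.2}
print(choose(wardrobe, wardrobe.get))  # -> {'blue shirt', 'green shirt'}
```

On this picture the rightness criterion stays complete even under ties; all that remains open is the arbitrary pick among the permissible actions.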
To say that you may choose either of two actions when it doesn’t matter which one you choose, since they have the same value, isn’t to give “no guidance”.
Proves my point. That’s no different from how most moral theories respond to questions like “which shirt do I wear?”. So this ‘completeness criterion’ has to be made so weak as to be uninteresting.
I’m confused. Is it normal to regard all possible acts and decisions as morally significant, and to call a universal decision theory a moral theory?
What meaning does the word “moral” even have at that point?
Nobody is calling “a universal decision theory a moral theory”. According to hedonistic utilitarianism, and indeed all consequentialism, all actions are morally significant.
‘Moral’ means regarding opinions of which actions ought to be performed.
So “morals” is used to mean the same as “values” or “goals” or “preferences”. That’s not how I’m used to encountering the word, and it’s confusing in comparison to how it’s used in other contexts. Humans have separate moral and non-moral desires (and beliefs, emotions, judgments, etc.), and when discussing human behavior, as opposed to idealized or artificial behavior, the distinction is useful.
Of course every field or community is allowed to redefine existing terminology, and many do. But now, whenever I encounter the word “moral”, I’ll have to remind myself I may be misunderstanding the intended meaning (in either direction).
In other words, a decision theory, complete with an algorithm (so you can actually use it), and a full set of terminal goals. Not what anyone else means by “moral theory”.
When people talk about moral theories they refer to systems which describe the way that one ought to act or the type of person that one ought to be. Sure, some moral theories can be called “a decision theory, complete with an algorithm (so you can actually use it), and a full set of terminal goals,” but I don’t see how that changes anything about the definition of a moral theory.
The question needs to cover how one should act in all situations, simply because we want to answer the question. Otherwise we’re left without guidance and with uncertainty.
Well, first, we normally don’t think of questions like which clothes to wear as moral ones. Second, we’re not left without guidance when morality leaves these issues alone: we have pragmatic reasons, for instance. Third, we will always have to deal with uncertainty anyway, due to empirical uncertainty if nothing else, so uncertainty must be acceptable.
There is one additional issue I would like to highlight, an issue which is rarely mentioned or discussed. Commonly, normative ethics concerns itself only with human actions. The subspecies Homo sapiens sapiens has understandably had a special place in philosophical discussions, but the question is not inherently about only one subspecies in the universe. The completeness criterion covers all situations in which somebody should perform an action, even if this “somebody” isn’t a human being. Human successors, alien life in other solar systems, and other species on Earth shouldn’t be arbitrarily excluded.
I’d agree, but accounts of normativity which are mind- or society-dependent, such as constructivism, would have reason to make accounts of ethics for humanity different from accounts of ethics for nonhumans.
It seems like an impossible task for any moral theory based on virtue or deontology to ever be able to fulfil the criteria of completeness and consistency
I’m not sure I agree there. Usually these theories don’t because the people who construct them disagree with some of the criteria, especially #1. But it doesn’t seem difficult to make a complete and demanding form of virtue ethics or deontology.