I’d really like to know what these folks are thinking. Are they using ‘morality’ in the way Nietzsche did when he called himself an amoralist? Or do they really think there’s nothing to the concepts of ‘good/bad’ and ‘right/wrong’?
Supposing one wants to open a pickle jar, and one considers the acts of (a) twisting the top until it comes off, (b) smashing the jar with a hammer, and (c) cutting off one’s own hand with a chainsaw, do these folks think (for instance) that (a) is no better than (c)?
I would guess that they would say that one can certainly have preferences, without there being anything worth calling “morality”.
It’s probably more of a statement about our jargon: most OB veterans are probably on board with using “morality” broadly, to talk about our goal systems and decision processes, rather than as if it implied naive moral realism.
I’d suspect that some of the 14 are relative newcomers who thought the question was asking whether they accepted some form of moral realism. I’d also expect that some are veterans who simply disagreed that the term “morality” should be extended in the above fashion.
Someone can believe that an action is good or bad for a purpose without believing that there is any ultimate reason to choose one purpose over another. Once you’ve assumed very high-level goals, further discussion is about effectiveness rather than morality. Further, except in the case of sub-goals (where goal X is required or useful for reaching goal Y), rationality has nothing to say about “choosing” goals, which means you cannot rationally argue about morality with someone whose highest goal conflicts with your own.
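A toy sketch of that last point (the agents, actions, and payoffs below are all invented for illustration): two flawless maximizers with conflicting terminal goals each pick the action that is “right” by their own lights, and nothing in the maximization machinery itself adjudicates between the goals.

```python
# Toy sketch: instrumental rationality is well-defined relative to a fixed
# terminal goal, but silent on which terminal goal to hold.
# Agents, actions, and payoffs are invented for illustration.

actions = ["cooperate", "defect"]

# Two agents with conflicting highest goals, expressed as utility functions.
utility = {
    "agent_A": {"cooperate": 1.0, "defect": 0.0},
    "agent_B": {"cooperate": 0.0, "defect": 1.0},
}

for agent, u in utility.items():
    best = max(actions, key=lambda a: u[a])
    print(f"{agent}'s rational choice: {best}")

# Both agents maximize flawlessly; the disagreement lives entirely in the
# utility functions, which the maximization step simply takes as given.
```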
But ethics doesn’t just apply to these high-level goals. A utilitarian is committed to whatever action generates the most overall net utility, even when choosing how to (for instance) open a pickle jar. (Of course, it has rightly been argued that even a true utilitarian might in fact do best not to consider the question while making the decision, given the cost of the deliberation itself.) If it turns out that (b) yields more overall net utility than (a), then the utilitarian says (a) was the wrong thing to do.
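To make the utilitarian bookkeeping concrete, here is a minimal sketch; every utility number in it, including the cost assigned to deliberating, is invented for illustration:

```python
# Toy utilitarian calculus for the pickle-jar options above.
# All utility values, including DELIBERATION_COST, are invented.

DELIBERATION_COST = 0.5  # assumed utility cost of weighing the options at all

options = {
    "(a) twist the top off":           {"benefit": 10.0, "side_effects": 0.0},
    "(b) smash the jar with a hammer": {"benefit": 10.0, "side_effects": 3.0},
    "(c) chainsaw off one's own hand": {"benefit": 10.0, "side_effects": 1e6},
}

def net_utility(option):
    return option["benefit"] - option["side_effects"]

# The utilitarian is committed to whichever action maximizes net utility.
best = max(options, key=lambda name: net_utility(options[name]))
print("Utilitarian's choice:", best)

# But deliberation itself costs utility: if the obvious default is already
# (near-)optimal, a true utilitarian does best not to run the comparison.
default = "(a) twist the top off"
gain_from_deliberating = net_utility(options[best]) - net_utility(options[default])
if gain_from_deliberating <= DELIBERATION_COST:
    print("Deliberating wasn't worth its cost; acting on the default sufficed.")
```

The jar opens either way; the point is just that even this trivial choice falls inside the utilitarian’s ledger, and that the ledger can also price the act of consulting it.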
If someone nonetheless thinks one should do (a) rather than (b), because one should choose the option that most effectively reaches one’s goals without terrible side effects, then that person disagrees with the utilitarian above about ethics. And if you don’t believe in ethics, you have no grounds for disagreeing with the utilitarian.
See e.g. non-cognitivism and error theory.