Arguing about terminal values is pointless, of course. But morality does have a good and useful definition, although some people use the word differently, which muddles the issue.
‘Morality’ (when I use the word) refers to a certain human behavior (with animal analogues). Namely, the judging of (human) behaviors and actions as ‘right’ or ‘wrong’. There are specific mechanisms for that in the human brain—for specific judgments, but more so for feeling moral ‘rightness’ or ‘wrongness’ even when the judgments are largely culturally defined. These judgments, and consequent feelings and reactions, are a human universal which strongly influences behavior and social structures. And so it is interesting and worthy of study and discussion. Furthermore, since humans are capable of modifying their behavior to a large extent after being verbally convinced of new claims, it is worthwhile to discuss moral theories and principles.
We have some in-built valuations of behavior, and reactions to those valuations. Humans, like animals, judge behavior with varying degrees of approval/disapproval, including rewards/punishments. Where we likely differ from animals is that we judge higher-order behavior as well: not just the behavior, but the moral reaction to the behavior, then the moral reaction to the moral reaction to the behavior, and so on.
Of course morality is real and can be studied scientifically, just like anything else about us. The first thing to notice when studying it is that we don’t have identical moralities, just as we don’t have identical genes or identical histories. Some of the recent work on morality shows it comes in certain dimensions (fairness, autonomy, purity, group loyalty, etc.), and people tend to weigh these factors consistently in their own judgments but differently from the judgments of others. I interpret that as us having relatively consistent pattern-matching algorithms that identify dimensions of moral saliency, but less consistent weightings of those dimensions.
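The shared-pattern-matcher, personal-weights idea can be sketched as a toy model. This is my own illustration, not a model from the research mentioned above; the dimension names follow the ones listed, and all the numbers are made up.

```python
# Toy model: everyone runs the same saliency detectors over an act,
# but each person applies their own weights to the detected dimensions.

DIMENSIONS = ["fairness", "autonomy", "purity", "group_loyalty"]

def moral_judgment(saliency, weights):
    """Weighted sum of per-dimension saliency; positive reads as 'right'."""
    return sum(weights[d] * saliency[d] for d in DIMENSIONS)

# Two people perceive the same act identically (same saliency pattern)...
act = {"fairness": -0.8, "autonomy": 0.3, "purity": 0.0, "group_loyalty": 0.6}

# ...but weight the dimensions differently (hypothetical weight profiles):
alice = {"fairness": 0.9, "autonomy": 0.4, "purity": 0.1, "group_loyalty": 0.2}
bob   = {"fairness": 0.2, "autonomy": 0.1, "purity": 0.3, "group_loyalty": 0.9}

print(moral_judgment(act, alice))  # negative: Alice judges the act wrong
print(moral_judgment(act, bob))    # positive: Bob judges the same act right
```

Same inputs, same detection machinery, opposite verdicts: the disagreement lives entirely in the weights.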
The funny thing is that what is termed “objective morality” is transparently nonsense once you look at morality scientifically. We’re not identical—obviously. We don’t have identical moralities—obviously. Any particular statistic of all actual human moralities, for any population of humans, will just be one of infinitely many possible statistics—obviously. The attempt to “scientifically” identify the One True Statistic, à la Harris, is nonsense on stilts.
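The arbitrariness of picking “the” statistic is easy to make concrete. A toy example of my own construction, with made-up numbers: take one moral dimension’s weight across a small population, and notice that equally defensible summary statistics give different answers.

```python
# Hypothetical per-person weights on a single moral dimension.
# Any one of these summaries could be crowned "the" population morality,
# and they disagree with each other.
weights = [0.1, 0.1, 0.1, 0.9, 0.9]

mean = sum(weights) / len(weights)            # arithmetic mean
median = sorted(weights)[len(weights) // 2]   # middle value
mode = max(set(weights), key=weights.count)   # most common value

print(mean, median, mode)  # the mean disagrees with the median and mode
```

Nothing in the data itself privileges the mean over the median, the mode, or any of the infinitely many other functions of the population; that choice is imposed from outside.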
That’s a meta-level discussion of morality, which I agree is perfectly appropriate. But unless someone is already a utilitarian, very few, if any, arguments will make em one.
Why would I want to make someone a utilitarian? I’m not even one myself. I am human; I have different incompatible goals and desires which most likely don’t combine into a single utility function in a way that adds up to normality.
You’re about where I am.