Maybe it's better to say that morality is a certain species of valuation.
It is a valuation of behavior, associated with different behaviors and emotions depending on the result of the valuation. When the valuation is positive, you get various flavors of increased liking and you reward; when the valuation is negative, you get various flavors of disliking and you punish.
Note that the valuations are multiordinal—you may punish the person exhibiting the behavior, and punish those who don’t punish the behavior, and punish those who don’t say that they would punish the behavior, …
We’re social creatures. We have evolved mechanisms for interacting with others. You have moral pattern recognition algorithms in your head which fire for some types of behavior, eliciting the moral emotional and behavioral reactions in turn. Being adaptive, learning creatures, those mechanisms are adaptable.
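To make that concrete, here’s a toy sketch of the mechanism; every behavior and number in it is invented for illustration, not a claim about what the actual pattern matchers compute:

```python
# Toy sketch of the mechanism described above: a valuation over behaviors,
# with the sign of the valuation driving reward or punishment. The behaviors
# and numbers are invented for illustration only.

def moral_valuation(behavior: str) -> float:
    """Stand-in for the pattern matchers in your head; values are made up."""
    scores = {
        "shares food": +1.0,
        "steals food": -1.0,
        "ignores the thief": -0.5,  # failing to punish is itself a valued behavior
    }
    return scores.get(behavior, 0.0)

def react(behavior: str) -> str:
    """Positive valuation: liking and reward. Negative valuation: disliking and punishment."""
    v = moral_valuation(behavior)
    if v > 0:
        return f"reward ({v:+.1f})"
    if v < 0:
        return f"punish ({v:+.1f})"
    return "indifferent"

for b in ["shares food", "steals food", "ignores the thief"]:
    print(b, "->", react(b))
```

The “ignores the thief” entry is the multiordinal part: failing to punish is itself a behavior the valuation gets to chew on.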
Obert tries to attack this in a number of ways.
First, by the language we use. We use “I want” in different contexts than “that’s right”, with different associated emotions and actions. He’s correct, but it doesn’t really make his point. “I want” is also different from “that’s yummy”. They’re used for different kinds of valuations, resulting in different kinds of emotions and actions depending on how the valuation turns out. Does that make “yummy” not a preference, not a valuation?
Same thing for Obert’s subsequent questioning of Subhan’s psychological account of “I want” versus “it is right”. I agree with Obert’s comment that Subhan’s account is unrealistic. I agree that when someone says “it is moral”, they are usually pinging different valuations than when they say “I want”. Do we want people to do what is moral? Usually. But we also use “want” when we want to distinguish plain preferences from moral preferences, the general from the specific.
Time and time again, Obert tries to make his point by showing that we don’t use “moral” in the same way as “want”.
It’s useful to also distinguish between want and yummy, and similarly useful to distinguish between want and moral. A wise fellow once said:
“Well, it may turn out that the moral thing to do was not the right thing to do.”
Old Jean-Luc makes a good point. “Moral”, since it pings our behavioral pattern matchers, pings only a subset of our values. When taken in total, we may not prefer the moral behavior over all alternatives. The yummy food may not be the right food, and the moral action may not be the right action. Life is full of trade-offs.
Obert goes on to attack Subhan’s contention that morality is what “society wants”. I have an aversion to this view, but I’d note that recent work on analyzing the dimensions of morality has shown that for some people, obedience to the tribe is a very strong component of their morality. They’ve come up with six moral dimensions (or so), and found that people have fairly consistent relative weightings across the different dimensions. My morality algorithm rates social conformity low and autonomy high.
I’d expect we could find the same kinds of patterns with yummy. You have different types of taste buds, and everyone’s yummy detector will show different relative weightings for activation of the different types of taste buds. (Different weightings for different combinations too. I wonder if they’ve done that kind of combinatoric analysis for morality yet.)
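Here’s roughly how I picture those weightings working. It’s a sketch under loose assumptions: the dimension names loosely follow the moral-foundations work, and every weight and activation is made up.

```python
# Sketch of "same moral algorithm, different weightings on the dimensions".
# Dimension names loosely follow the moral-foundations literature; all the
# weights and activations below are invented for illustration.

DIMENSIONS = ["care", "fairness", "loyalty", "authority", "sanctity", "liberty"]

def moral_score(weights, activations):
    """Weighted sum over dimensions, analogous to a yummy detector summing taste buds."""
    return sum(weights[d] * activations.get(d, 0.0) for d in DIMENSIONS)

# My profile: social conformity (loyalty/authority) rated low, autonomy (liberty) high.
me     = {"care": 0.9, "fairness": 0.9, "loyalty": 0.2, "authority": 0.1, "sanctity": 0.1, "liberty": 0.9}
# Someone for whom obedience to the tribe is a strong component of morality.
tribal = {"care": 0.5, "fairness": 0.4, "loyalty": 0.9, "authority": 0.9, "sanctity": 0.7, "liberty": 0.2}

# A behavior described by how strongly it pings each dimension (unlisted = 0).
defying_an_unjust_order = {"authority": -0.8, "liberty": +0.9, "fairness": +0.6}

print("me:    ", moral_score(me, defying_an_unjust_order))      # positive: approve, reward
print("tribal:", moral_score(tribal, defying_an_unjust_order))  # negative: disapprove, punish
```

Same algorithm, different weights, opposite verdicts—which is what consistent individual differences look like from the outside.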
The final attack on morality as preference concerned the advancement of morality.
Notice how this works. We evaluate across all of time, us versus all the different thems, and find that by our evaluation, we’ve advanced. Hmmm. Do you think that people from a thousand years ago would necessarily agree with that evaluation? No? Oh, they’re wrong. I see. But they’d say that we’re wrong, right? “Yeah yeah, but they’re really the ones who are wrong.” Got it.
But I think advancement is actually possible, and even to be expected, on the preference account. People aren’t all-knowing. Over the years, you gain experience. It shouldn’t be surprising that people get better at optimizing some valuation over time. Nor should it be surprising that the valuations change over time as our situation changes.
One thing I do disagree with, and which should be empirically verifiable, if it hasn’t already been verified:
“I am not so sure; human cognitive psychology has not had time to change evolutionarily over that period.”
I think it has had plenty of time. Not just our psychology, which one could partially attribute to societal changes, but the statistics of the gene pool. I saw a recent study on differential birth and child survival rates by income in the 18th century (19th?). We should expect income to have some correlates with genetics, and large differential birth rates in populations with genetic differences mean large shifts in population genetics.
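A back-of-the-envelope sketch of that claim, with a deliberately crude model and invented numbers: two groups that don’t interbreed, each holding its own allele frequency, one reproducing faster than the other.

```python
# Crude illustration (invented numbers, no interbreeding, no within-group
# selection) of how differential birth rates shift population-wide gene frequencies.

def step(share_a, fert_a, fert_b):
    """One generation: each group reproduces at its own rate; return group A's new population share."""
    offspring_a = share_a * fert_a
    offspring_b = (1 - share_a) * fert_b
    return offspring_a / (offspring_a + offspring_b)

p_a, p_b = 0.6, 0.4        # allele frequency inside group A and group B
share_a = 0.5              # group A starts as half the population
fert_a, fert_b = 1.3, 1.0  # group A has 1.3x the birth/survival rate

for gen in range(6):
    freq = p_a * share_a + p_b * (1 - share_a)  # population-wide allele frequency
    print(f"generation {gen}: {freq:.3f}")
    share_a = step(share_a, fert_a, fert_b)
```

In this toy model, a modest 1.3x reproductive edge moves the population-wide frequency from 0.500 to roughly 0.558 in five generations.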
And considering the moral dimension in particular, since punishment and reward are large parts of the moral response, that gives a directed component to genetic evolution. Kill off everyone you see displaying a certain phenotype and breed only the phenotypes you like, and you can make a lot of progress very quickly.
But this gets rather tiresome. I don’t think Subhan is a worthy champion for “morality as preference”, so cleaning up his mess doesn’t really amount to much. Likely Subhan would feel the same about me. That’s the limitation of these kinds of Socratic dialogues—they’re only as convincing as the champion that loses.
I’ve tried instead to give some positive account of my views of morality as preference, how it works, and how it answers the usual objections against it, along with the commentary on Obert and Subhan. I really don’t think it’s that complicated once you treat human beings as evolved social creatures.