Why would you try to do away with your personal preferences? What makes them inferior (edit: speaking as one specific agent) to some blended average of myriads of other humans? (Is it because of your mirror neurons? ;-)
If you have a preference for morality, being moral is not doing away with that preference: it is allowing your altruistic preferences to override your selfish ones.
You may be on the receiving end of someone else’s self-sacrifice at some point.
Certainly, but in that case your preference for the moral action is your personal preference, which is your ‘selfish’ preference. No conflict there. You should always do that which maximizes your utility function. If you call that moral, we’re in full agreement. If your utility function is maximized by caring about someone else’s utility function, go for it. I do, too.
That’s nice. Why would that cause me to do things which I do not overall prefer to do? Or do you say you always value that which you call moral the most?
Certainly, but in that case your preference for the moral action is your personal preference, which is your ‘selfish’ preference.
I can make a quite clear distinction between my preferences relating to an apersonal loving-kindness towards the universe in general, and the preferences that center around my personal affections and likings.
You keep trying to do away with a distinction that has huge predictive ability: a distinction that helps determine what people do, why they do it, how they feel about doing it, and how they feel after doing it.
If your model of people’s psychology conflates morality and non-moral preferences, your model will be accurate only for the most amoral of people.