It has been stated that this post shows that all values are moral values (or that there is no difference between morality and valuation in general, or so on), in contrast with the common-sense view that there are clear examples of morally neutral preferences, such as preferences for different flavours of ice cream.
I am not convinced by the explanation, since it applies equally to non-moral preferences. If I have a lower-priority, non-moral preference to eat tasty food, and a higher-priority preference to stay slim, I still need to consult my higher-priority preference when wishing for yummy ice cream.
To be sure, an agent capable of acting morally will have morality among their higher-priority preferences—it has to be among the higher-order preferences, because it has to override other preferences for the agent to act morally. Therefore, when they scan their higher-priority preferences, they will happen to encounter their moral preferences. But that does not mean every preference is necessarily a moral preference. Indeed, their moral preferences override other preferences, which are therefore non-moral, or at least less moral.
"There is no safe wish smaller than an entire human morality."
Rather, there is no safe wish smaller than the subset of the value structure, moral or amoral, that sits above it in priority. The subset below it doesn't matter. And a value structure need not be moral at all: the lower storeys will probably be amoral even if the upper storeys are not.
Therefore morality is, in general, a proper subset of preferences, as common sense maintained all along.