… to decide that something is morally significant is equivalent to deciding that we would not self-modify to avoid noticing that significance
What do you think of the idea that we treat something as a moral value, rather than a typical preference, when we consider it to be in danger of being modified?
For example, murder is the example most often cited as something that is universally immoral. However, it is also something that humans do. While humans often say that they shouldn’t murder, they frequently decide in specific circumstances that taking a life is something they should do. This value against killing seems particularly modifiable.
(I said something similar yesterday but reading that post I see I didn’t communicate well.)
As sets, we can think of all of our preferences as contained within a set P. Of those preferences, we either value them to some extent or feel indifferent about them. Let these be the sets ‘PV’ and ‘P~V’. (For example, I don’t care which color I prefer or whether I have allergies, so those preferences would be in ‘P~V’. Not wanting people to die and wanting to eat when I’m hungry so I don’t starve would be in ‘PV’.) Then within ‘PV’ there is a further division: preferences are either stable (‘PVS’) or not stable (‘PV~S’). Only preferences in the last category, ‘PV~S’, would be considered moral preferences.
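Stated a bit more formally (just a sketch of the same partition, keeping the labels above as subscripts):

$$P = P_{V} \cup P_{\sim V}, \quad P_{V} \cap P_{\sim V} = \emptyset$$
$$P_{V} = P_{VS} \cup P_{V \sim S}, \quad P_{VS} \cap P_{V \sim S} = \emptyset$$
$$\text{moral preferences} = P_{V \sim S}$$

So on this proposal, the moral preferences are exactly the valued-but-unstable ones: preferences we care about yet recognize as being in danger of modification.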