While morality seems closely related to (a) signaling to other people that you have the same values and are trustworthy and won’t defect or (b) being good to earn “points”, neither of these definitions feels right to me.
I hesitate to take (a) because morality feels more like a personal, internal institution that operates in the interests of the agent. Even if the outcome serves the interests of society, and even if that is part of the explanation for why morality evolved, that doesn’t seem to reflect how it works.
I feel that (b) misses the point: we aren’t good in order to pragmatically “get points” for something. By using a term like ‘morality’ separate from pragmatism or cooperation, we’re acknowledging that points are given based on something more subtle and complex than pragmatism or cooperation (‘God’s preferences’ is one handle for this). (I mean, we’re good because we want to be, and ‘getting points’ is just a way of describing that. We wouldn’t do anything unless it meant getting some kind of points, either real or abstract.)
I wrote down a hypothesis for morality a week ago and decided I would think about it later.
Nazgulnarsil wrote:
to me, morality means not disastrously/majorly subverting another’s utility function for a trivial increase in my own utility.
I’m considering that moral means not subverting one’s own utility function.
Humans seem to have a lot of plasticity in choosing what their values are and what their values are about. We can think about things a certain way and develop perspectives that lead to values and actions extremely different from our initial ideas of what is moral. (For example, people presumably just like myself have torn children from their parents and sent them to starve in death camps.) It stands to reason we would need a strong internal protection system, some system of checks and balances, to keep our values intact.
Suppose we consider that we should always do whatever is pragmatically correct (pragmatic behavior includes altruistic, cooperative behavior) except when an action is suspected of subverting our utility function. I imagine that our utility function could be subverted if an action makes us feel hypocritical, and thus forces us to devalue a value that we hold.
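As a toy sketch of this rule (my own illustration, in Python, with invented actions, utilities, and a made-up flagging function rather than anything derived from the argument itself): pick the pragmatically best action, but veto any action suspected of eroding a value we want to keep.

```python
# Toy illustration only: "act pragmatically, except when an action would
# subvert one of our own values." Actions, utilities, and the flagging
# set below are invented for the example.

def choose_action(actions, pragmatic_utility, erodes_a_value):
    """Pick the highest-utility action that isn't suspected of
    undermining a protected value; refuse if no such action exists."""
    safe = [a for a in actions if not erodes_a_value(a)]
    if not safe:
        return None  # every option would erode a value we want to keep
    return max(safe, key=pragmatic_utility)

# Example: breaking a promise scores highest on narrow pragmatic grounds,
# but it is flagged because it would make us feel hypocritical.
actions = ["do nothing", "negotiate", "break promise"]
utility = {"do nothing": 0.0, "negotiate": 2.0, "break promise": 5.0}
flagged = {"break promise"}

print(choose_action(actions, utility.get, lambda a: a in flagged))  # -> "negotiate"
```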
For example, we all value other people (especially particular people). But if we would kill someone for pragmatic reasons (that is, we have some set of reasons for wanting to do so that outweigh reasons for not wanting to), we can still decide we wouldn’t kill them for this one other reason: we want to value not killing other people.
This is very subtle. We already do value not killing other people, but that value has already been weighed in the decision, and we still decide that, pragmatically, we would commit the murder. But we realize that if we commit the murder for these pragmatic reasons, even though it seems for the best given our current utility function, we can no longer pretend that we value life so much. We may also see a slippery slope: it will be easier to kill someone in the future, because now we know this value isn’t so strong.
If we do commit the murder anyway, because we are pragmatic rather than moral, then the role of guilt could be to realign and reset our values. “I killed him because I had to but I feel really bad about it; this means I really do value life.”
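Continuing the same toy sketch (again purely illustrative, with invented weights and an invented decay factor), guilt would act as a reset that restores a value’s weight after an action that eroded it:

```python
# Toy continuation: acting against a value weakens it (the slippery slope);
# guilt restores it to full strength. The numbers are invented assumptions.

values = {"life": 1.0, "honesty": 1.0}

def act_against_value(value_name, feel_guilt):
    """Model acting against a value, optionally followed by guilt."""
    values[value_name] *= 0.5      # the value erodes once we've overridden it
    if feel_guilt:
        values[value_name] = 1.0   # guilt realigns and resets the value

act_against_value("life", feel_guilt=False)
print(values["life"])  # 0.5 -- easier to override next time
act_against_value("life", feel_guilt=True)
print(values["life"])  # 1.0 -- "I feel really bad about it; I really do value life"
```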
So finally, morality could be about protecting values we have that aren’t inherently stable.