“values” is a term for the types of stories that give us pleasure.
It really depends on what you mean by “pleasure”. If pleasure is just “things you want”, then meaning comes from pleasure almost tautologically, since you want meaning.
If instead pleasure is a particular phenomenological feeling similar to feeling happy or content, I think that many of us actually WANT the meaning that comes from living our values, and it also happens to give us pleasure. I think there are also people who just WANT the pleasure, and if they could get it while ignoring their values, they would.
I call this the “Heaven/Enlightenment” dichotomy, and I think it’s a frequent source of misunderstanding.
I’ve seen some people say “all we care about is feeling good, and people who think they care about the outside world are confused.” I’ve also seen people say “All we care about is meeting our values, and people who think it’s about feeling good are confused.”
Personally, I think that people are more towards one side of the spectrum or the other along different dimensions, and I’m inclined to believe both sides about their own experience.
I think we can consider pleasure, along with altruism, consistency, rationality, fitting the categorical imperative, and so forth as moral goods.
People have different preferences for how they trade these off against one another when they’re in conflict. But of course they prefer them not to be in conflict.
What I’m interested in is not what weights people assign to these values (I agree with you that they are diverse) but what causes people to adopt any set of preferences at all.
My hypothesis is that it’s pleasure. Or more specifically, whatever moral argument most effectively hijacks an individual person’s psychological reward system.
So if you wanted to understand why another person considers some strange action or belief to be moral, you’d need to understand why the belief system that they hold gives them pleasure.
Some predictions from that hypothesis:
People who find a complex moral argument unpleasant to think about won’t adopt it.
People who find a moral community pleasant to be in will adopt its values.
A moral argument might be very pleasant to understand, rehearse, and think about, and unpleasant to abandon. It might also be unpleasant in the actions it motivates its subscribers to undertake. It will persist in their minds as long as the balance of pleasure in belief to displeasure in action is favorable.
Deprogramming somebody from a belief system you find abhorrent is best done by giving them alternative sources of “moral pleasure.” Examples include the ways people have deprogrammed members of cults and the KKK: inviting them to social gatherings, including Jewish religious dinners, and making them feel welcome. Eventually, the pleasure of adopting the moral system of that shared community displaces whatever pleasure they were deriving from their former belief system.
Paying somebody in money and status to uphold a given belief system is a great way to keep them doing it, no matter how silly it is.
If you want people to do more of a painful but necessary action X, helping them feel compensating forms of moral pleasure is a good way to go about it. Effective Altruism is a great example: by helping people understand how effective donations or direct work can save lives, it gives them a feeling of heroism. Its failure mode is making people feel like the demands are impossible, and the displeasure of that disappointment is a primary issue in that community.
Another good way to encourage more of a painful but necessary action X is to teach people how to shape it into a good story that they and others will appreciate in the telling. Hence the story-fication of charity.
Many people don’t give to charity because their community disparages it as “do-gooderism,” as futile, as bragging, or as a tasteless display of wealth and privilege. If you want people to give more to charity, you have to give them a way to enjoy talking about their charitable contributions. One solution is to form a community in which that’s openly accepted and appreciated. Like EA.
Likewise for the rationality community. If you want people to do more good epistemology outside of academia, give them an outlet where it’ll be appreciated and a base from which it can spread.
My hypothesis is that it’s pleasure. Or more specifically, whatever moral argument most effectively hijacks an individual person’s psychological reward system.
This just kicks the can down the road on defining “pleasure”; all of my points still apply.
If instead pleasure is a particular phenomenological feeling similar to feeling happy or content, I think that many of us actually WANT the meaning that comes from living our values, and it also happens to give us pleasure.
That is, I think it’s possible to say that pleasure kicks in around values that we really want, rather than vice versa.