While I agree with your philosophical claims, I am dubious about whether treating moral beliefs as “mere preferences” would produce positive outcomes. This is because:
1. Psychologically, I think treating moral beliefs like other preferences will increase people's willingness to value-drift, or even to deliberately induce value-drift for personal benefit.
2. Regardless of whether we view moral preferences as egoist, we will treat people's moral preferences differently than we treat their other preferences, because this preserves cooperation norms.
3. These cooperation norms set up a system that makes unnecessary punishment likely regardless of whether we treat morality as egoist. Furthermore, egoist morality might make excessive punishment worse by increasing the number of value-drifters trying to exploit social signalling.
In short, I think egoist perspectives on morality could cause one really big problem (value-drift) without changing the social incentives around excessive punishment (insurance against defection caused by value-drift).
In long, here’s a really lengthy write-up of my thought process and why I’m suspicious of egoism:
1. It sounds like something that would encourage deliberately inducing value drift
While I agree that utilitarianism (and morality in general) can often be described as a form of egoism, I think that reframing our moral systems as egoist (in the sense of “I care about this because it appeals to me personally”) dramatically raises the likelihood of value-drift. We can often reduce the strength of common preferences through psychological exercises or simply by getting used to different situations. If we treat morals as less sacred and more like things that appeal to us, we are more likely to address high-cost moral preferences (e.g. caring about animal welfare enough to become vegetarian or to feel alienated by meat-eaters) by simply deciding the preference causes too much personal suffering and getting rid of it (e.g. by just deciding to be okay with the routine suffering of animals).
Furthermore, from personal experimentation, I can tell you that the above strategy does in fact work and individuals can use it to raise their level of happiness. I’ve also discussed this on my blog where I talk about why I haven’t internalized moral anti-realism.
2. On a meta-level, we should aggressively punish anyone who deliberately induces value-drift
I try to avoid such strategies now, but only because of a meta-level belief that deliberately causing value-drift to improve your emotional well-being is morally wrong (or at least wrong in a more sacred way than most preferences). Naively, I could accept the egoist interpretation that not deliberately inducing value-drift is just a preference for how I want the world to work and throw it away too (which would be easy, because meta-level preferences are more divorced from intuition than object-level ones). However, this meta-level belief about not inducing value-drift has some important properties:
1. “Don’t modify your moral preferences because it would personally benefit you” is a really important and probably universal rule in the Kantian sense. The extent to which this rule is believed limits (so long as competition exists) the extent to which whatever group just got into power is willing to exploit everyone else. In other words, it puts an upper bound on the extent to which people are willing to defect against other people.
2. With the exception of public figures, it is very hard to measure whether someone is selfishly inducing value-drift or simply changing their mind.
From 1, we find that there is extreme societal value in treating moral preferences as different from ordinary preferences. From 2, we see that enforcing this is really hard. This means that we probably want to approach this issue using the First Offender model and commit to caring a disproportionate amount about stopping people from deliberately inducing value-drift. Practically, we see this commitment emerge when people accuse public figures of hypocrisy. We can also see that apathy towards hypocrisy claims is often couched in the idea that “all public figures are rotten”, which is itself an expression of mutual defection.
3. Punishing people with different moral beliefs makes value-drift harder
Because we want to aggressively dissuade value-drift, aggressive punishment is useful not solely to disincentivize a specific moral failing but also to force the punisher to commit to a specific moral belief. This is because:
* People don’t generally like punishing others, so punishing someone is a signal that you really care a lot about the thing they did.
* People tend to like tit-for-tat “do unto others as others do unto you” strategies, and punishing someone for an action makes you vulnerable to being similarly punished. This is mostly relevant on local and social levels of interaction rather than ones with bigger power gradients.
I personally don’t really like these strategies, since they probably cause more suffering than the value they provide in navigating cooperation norms. Moreover, the only reason people would benefit from being punishers in this context is if committing to a particular belief will raise their status. This disproportionately favors people with consensus beliefs but, more problematically, it also favors people who don’t have strong beliefs but could socially benefit from fostering them (i.e. value-drifters), so long as they never change again. Consequently, I think that mainstreaming egoist attitudes about morality would promote value-drifting in a way that makes the mechanisms that prevent value-drift perform worse.
Conclusion
The above is a lot but, for me, it’s enough to be very hesitant about the idea of being explicitly egoist. I think egoism could cause a lot of problems with moral value-drift. I also think that the issue of excessive punishment due to moral judgement has deeper drivers than whether humanity describes morality as egoist. I think a better solution would probably be to directly emphasize “no one deserves to suffer” attitudes in morality.
It’s interesting that you approach it from the “bad effects of treating moral beliefs as X” rather than “moral beliefs are not X”. Are you saying this is true but harmful, or that it’s untrue (or something else, like neither are true and this is a worse equilibrium)?
I do not understand the argument about value drift, when applied to divergent starting values.
As someone leaning moral anti-realist, I lean roughly towards saying that it’s true but harmful. To be more nuanced, I don’t think it’s necessarily harmful to believe that moral preferences are like ordinary preferences, but I do think that treating moral preferences like other preferences is a worse equilibrium.
If we could accept egoism without treating our own morals differently, then I wouldn’t have a problem with it. However, I think that a lot of the bad (more defection) parts of how we judge other people’s morals are intertwined with the good (less defection) ways that we treat our own.
Can you elaborate on what you don’t understand about value drift applied to divergent starting values?