I agree that, among ethicists, being of one school or another probably isn’t predictive of engaging more or less in “one thought too many.” Ethicists are generally not moral paragons in that department. Overthinking ethical stuff is kind of their job though – maybe be thankful you don’t have to do it?
That said, I do find that (at least in writing) virtue ethicists do a better job of highlighting this as something to avoid: they are better moral guides in this respect. I also think that they tend to muster a more coherent theoretical response to the problem of self-effacement: they more or less embrace it, while consequentialists try to dance around it.
It sounds like you’re arguing not so much for everybody doing less moral calculation, and more for delegating our moral calculus to experts.
I think we meet even stronger limitations to moral deference than we do for epistemic deference: experts disagree, people pose as experts when they aren’t, people ignore expertise where it exists, laypeople pick arguments with each other even when they’d both do better to defer, experts engage in interior moral disharmony, etc. When you can do it, I agree that deference is an attractive choice, as I feel I am able to do in the case of several EA institutions.
I strongly dislike characterizations of consequentialism as “dancing around” various abstract things. It is a strange dance floor populated with strange abstractions, and I think it behooves critics to say exactly what they mean, so that consequentialists can make specific objections to those criticisms. Alternatively, we consequentialists can volley the same critiques back at the virtue ethicists: the Catholic church seems to do plenty of dancing around its own seedy history of global-scale conquest, theft, and abuse, while asking for unlimited deference to a moral hierarchy it claims is not only wise, but infallible. I don’t want to be a cold-hearted calculator, but I also don’t want to defer to, say, a church with a recent history of playing the ultimate pedophiliac shell game. If I have to accept a little extra dancing to vet my experts and fill in where ready expertise is lacking, I am happy for the exercise.
Regarding moral deference: I agree that moral deference as it currently stands is highly unreliable. But even if it were reliable, I don’t think a world in which agents did a lot of moral deference would be ideal. The virtuous agent doesn’t tell their friend, “I deferred to the moral experts and they told me I should come see you.”
I do emphasize the importance of having good moral authorities/exemplars help shape your character, especially when we’re young and impressionable. That’s not something we have much control over – when we’re older, we can somewhat control who we hang around and who we look up to, but that’s about it. This does emphasize the importance of being a good role model for those around us who are impressionable though!
I’m not sure if you would call it deference, but I also emphasize (following Martha Nussbaum and Susan Feagin) that engaging with good books, plays, movies, etc. is critical for practicing moral perception, with all the appropriate affect, in a safe environment. And indeed, it was a book (Marmontel’s Mémoires) that helped J.S. Mill get out of his internal moral disharmony. If there are any experts here, it’s the creators of these works. And if they have a claim to moral expertise, it is an appropriately humble folk expertise which, imho, is just about as good as our current state-of-the-art ethicists’ expertise. Where creators successfully minimize any implicit or explicit judgment of their characters/situations, they don’t even offer moral folk expertise so much as give us complex, detailed scenarios to grapple with and test our intuitions (I would hold up Lolita as an example of this). That exercise in grappling with the moral details is itself healthy (something no toy “thought experiment” can replace).
Moral reasoning can of course be helpful when trying to become a better person. But it is not the only tool we have, and over-relying on it has harmful side-effects.
Regarding my critique of consequentialism: Something I seem to be failing to do is make clear when I’m talking about theorists who develop and defend a form of Consequentialism versus people who have, directly or indirectly, been convinced to operate on consequentialist principles by those theorists. Call the first “consequentialist theorists” and the latter “consequentialist followers.” I’m not saying followers dance around the problem of self-effacement – I don’t even expect many to know what that is. It’s a problem for the theorists, and not one that’s going to get resolved in a forum comment thread. I only mentioned it to explain why I was singling out Consequentialism in my post: because I happen to know consequentialist theorists struggle with this more than VE theorists do. (As far as I know, DE theorists struggle with it too, and I tried to make that clear throughout the post, but I assume most of my readers are consequentialist followers and so don’t really care.) I also mentioned it because I think it’s important for people to remember their “camp” is far from theoretically airtight.
Ultimately I encourage all of us to be pluralists about ethics – I am extremely skeptical that any one theorist has gotten it all correct. And even if one had, we wouldn’t be able to tell with any certainty. At the moment, all we can do is try to heed the various lessons from the various camps/theorists. All I was trying to do was pass on a lesson one hears quite loudly in the VE camp that I suspect many in the Consequentialism camp haven’t heard very often or paid much attention to.
It sounds like what you really care about is promoting the experience of empathy and fellow-feeling. You don’t particularly care about moral calculation or deference, except insofar as they interfere with, or make room for, this psychological state.
I understand the idea that moral deference can make room for positive affect; what I remain skeptical of is the idea that moral calculation mostly interferes with fellow-feeling. It’s a hypothesis one could test, but it needs data.
Sorry, my first reply to your comment wasn’t very on point. Yes, you’re getting at one of the central claims of my post.
what I remain skeptical of is the idea that moral calculation mostly interferes with fellow-feeling
First, I wouldn’t say “mostly” – I think it interferes in excessive amounts. Regarding your skepticism: we already know that calculation (a maximizer’s mindset) in other contexts interferes with affective attachment to, and positive evaluations of, the choices made by said calculation (see the references to the psych literature). Why shouldn’t we expect the same thing to occur in moral situations, with the relevant “moral” affects? (In fact, depending on what you count as “moral,” the research already provides evidence of this.)
If your skepticism is about the sheer possibility of calculation interfering with empathy/fellow-feeling etc., then any anecdotal evidence should do – see e.g. Mill’s autobiography. But have you never been in a situation where you were conflicted between doing two different things with two different people/groups, and too much back and forth left you feeling numb to both options in the end, just shrugging and saying “whatever, I don’t care anymore, either one”? That would be an example of calculation interfering with fellow-feeling.
Some amount of this is normal and unavoidable. But one can make it worse. Whether the LW/EA community does so or not is the question in need of data – we can agree on that! See my comment below for more details.
First, I wouldn’t say “mostly.” I think in excessive amounts it interferes.
We’ve all sat around with thoughts whirling around in our heads, perseverating about ethics. Sometimes, a little ethical thinking helps us make a big decision. Other times, it’s not much different from having an annoying song stuck in your head. When we’re itchy, have the sun in our eyes, or, yes, can’t stop thinking about ethics, that discomfort shows in our face, in our bearing, and in our voice, and it makes it harder to connect with other people.
You and I both see that, just like a great song can still be incredibly annoying when it’s stuck in your head, a great ethical system can likewise give us a terrible headache when we can’t stop perseverating about it.
So, for a person who streams a lot of consequentialism on their moral Spotify, it seems like you’re telling them that if they’d just start listening to some nice virtue ethics instead and give up that nasty noise, they’d find themselves in a much more pleasant state of mind after a while. Personally, as someone who’s conversant with all three ethical systems, interfaces with many moral communities, and has fielded a lot of complex ethical conversations with a lot of people, I don’t see any more basis for thinking consequentialism is unusually bad as a “moral earworm” than (as a musician) I think any particular genre of music is especially prone to distressing earworms.
To me, perseveration/earworms feel more like a disorder of the audio loop, which latches on to thoughts, words, or sounds and cycles from one to another in a way you just can’t control. It doesn’t feel particularly governed by the content of those thoughts. Even if it is enhanced by specific types of mental content, reliably detecting an effect of that kind seems to require psychological methodologies that don’t actually exist: we’d have to see the thoughts in people’s heads, find out how often they perseverate, and try to detect a causal association. I think it’s unlikely that convincing evidence exists in the literature, and I find it dubious that we could achieve confidence in our beliefs on this matter without such a careful scientific study.
I claim that one’s level of engagement with the LW/EA rationalist community can weakly predict the degree to which one adopts a maximizer’s mindset when confronted with moral/normative scenarios in life, the degree to which one suffers cognitive dissonance in such scenarios, and the degree to which one expresses positive affective attachment to one’s decision (or the object at the center of their decision) in such scenarios.
More specifically, I predict that, above a certain threshold of engagement with the community, increased engagement with the LW/EA community correlates with an increase in the maximizer’s mindset, an increase in cognitive dissonance, and a decrease in positive affective attachment in the aforementioned scenarios.
On net, I have no doubt the LW/EA community is having a positive impact on people’s moral character. That doesn’t mean there can’t be harmful side-effects the LW/EA community produces, identifiable as weak trends among community-goers that are not present among other groups. Where such side-effects exist, shouldn’t they be curbed?