It sounds like what you really care about is promoting the experience of empathy and fellow-feeling. You don’t particularly care about moral calculation or deference, except insofar as they interfere with, or make room for, this psychological state.
I understand the idea that moral deference can make room for positive affect, but what I remain skeptical of is the idea that moral calculation mostly interferes with fellow-feeling. It’s a hypothesis one could test, but it needs data.
Sorry, my first reply to your comment wasn’t very on point. Yes, you’re getting at one of the central claims of my post.
what I remain skeptical of is the idea that moral calculation mostly interferes with fellow-feeling
First, I wouldn’t say “mostly.” I think in excessive amounts it interferes. Regarding your skepticism: we already know that calculation (a maximizer’s mindset) in other contexts interferes with affective attachment and positive evaluations towards the choices made by said calculation (see the references to the psychology literature). Why shouldn’t we expect the same thing to occur in moral situations, with the relevant “moral” affects? (In fact, depending on what you count as “moral,” the research already provides evidence of this.)
If your skepticism is about the sheer possibility of calculation interfering with empathy/fellow-feeling etc., then any anecdotal evidence should do. See e.g. Mill’s autobiography. But also, have you never been in a situation where you were conflicted between doing two different things with two different people/groups, and too much back-and-forth made you feel kinda numb to both options in the end, just shrugging and saying “whatever, I don’t care anymore, either one”? That would be an example of calculation interfering with fellow-feeling.
Some amount of this is normal and unavoidable. But one can make it worse. Whether the LW/EA community does so or not is the question in need of data – we can agree on that! See my comment below for more details.
First, I wouldn’t say “mostly.” I think in excessive amounts it interferes.
We’ve all sat around with thoughts whirling around in our heads, perseverating about ethics. Sometimes, a little ethical thinking helps us make a big decision. Other times, it’s not much different from having an annoying song stuck in your head. When we’re itchy, have the sun in our eyes, or, yes, can’t stop thinking about ethics, that discomfort shows in our face, in our bearing, and in our voice, and it makes it harder to connect with other people.
You and I both see that, just like a great song can still be incredibly annoying when it’s stuck in your head, a great ethical system can likewise give us a terrible headache when we can’t stop perseverating about it.
So, for a person who streams a lot of consequentialism on their moral Spotify, it seems like you’re telling them that if they’d just start listening to some nice virtue ethics instead and give up that nasty noise, they’d find themselves in a much more pleasant state of mind after a while. Personally, as someone who’s conversant with all three ethical systems, interfaces with many moral communities, and has fielded a lot of complex ethical conversations with a lot of people, I don’t see any more basis for thinking consequentialism is unusually bad as a “moral earworm” than (as a musician) I have for thinking any particular genre of music is especially prone to distressing earworms.
To me, perseveration/earworms feel more like a disorder of the audio loop, one that latches on to thoughts, words, or sounds and cycles from one to another in a way you just can’t control. It doesn’t feel particularly governed by the content of those thoughts. Even if it is enhanced by specific types of mental content, reliably detecting an effect of that kind seems to require psychological methodologies that do not actually exist. We’d have to see the thoughts in people’s heads, find out how often they perseverate, and try to detect a causal association. I think it’s unlikely that convincing evidence exists in the literature, and I find it dubious that we could achieve confidence in our beliefs on this matter without such a careful scientific study.
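To put a rough number on the difficulty: here’s a purely illustrative power calculation of my own (the standard Fisher z approximation; nothing here comes from any actual study), showing how many participants you’d need just to detect a weak correlation, never mind establish causation.

```python
# Back-of-the-envelope sample sizes for detecting a correlation r,
# via the Fisher z approximation. Illustrative only.
import numpy as np
from scipy.stats import norm

def n_required(r, alpha=0.05, power=0.80):
    """Approximate n to detect correlation r in a two-sided test."""
    z_a = norm.ppf(1 - alpha / 2)  # critical value for the test
    z_b = norm.ppf(power)          # quantile for the desired power
    return int(np.ceil(((z_a + z_b) / np.arctanh(r)) ** 2 + 3))

for r in (0.1, 0.2, 0.3):
    print(f"r = {r}: need n ≈ {n_required(r)}")
```

For a weak effect around r = 0.1 the answer is several hundred participants, and that’s before you’ve solved the much harder problem of measuring “perseveration about ethics” in the first place.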
Here is my prediction: I claim that one’s level of engagement with the LW/EA rationalist community can weakly predict the degree to which one adopts a maximizer’s mindset when confronted with moral/normative scenarios in life, the degree to which one suffers cognitive dissonance in such scenarios, and the degree to which one expresses positive affective attachment to one’s decision (or to the object at the center of the decision) in such scenarios.
More specifically, I predict that, above a certain threshold of engagement with the community, increased engagement with the LW/EA community correlates with an increase in the maximizer’s mindset, an increase in cognitive dissonance, and a decrease in positive affective attachment in the aforementioned scenarios. The hypothesis for why that correlation will be there is laid out mostly in this section and at the end of this section.
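To make the prediction concrete, here’s a minimal sketch of how one could test it. Everything in it is a stand-in assumption of mine: the variable names, the 1–7 scales, the threshold, and the simulated “data” that takes the place of a survey nobody has run.

```python
# Hypothetical test of the prediction: above an engagement threshold,
# engagement should correlate positively with maximizer mindset and
# cognitive dissonance, and negatively with affective attachment.
# All names, scales, effect sizes, and the threshold are illustrative.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n = 500

# Simulated survey: engagement in hours/week, outcomes on 1-7 scales.
engagement = rng.exponential(scale=5.0, size=n)
maximizer  = 3.0 + 0.15 * engagement + rng.normal(0.0, 1.0, n)
dissonance = 3.0 + 0.10 * engagement + rng.normal(0.0, 1.0, n)
attachment = 5.0 - 0.12 * engagement + rng.normal(0.0, 1.0, n)

THRESHOLD = 5.0  # "above a certain threshold of engagement"
above = engagement > THRESHOLD

for name, outcome in [("maximizer's mindset", maximizer),
                      ("cognitive dissonance", dissonance),
                      ("affective attachment", attachment)]:
    rho, p = spearmanr(engagement[above], outcome[above])
    print(f"{name}: rho = {rho:+.2f}, p = {p:.3g}")
```

On real data you’d swap the simulated columns for actual engagement and affect measures, and pre-register the threshold rather than picking it after seeing the results.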
On net, I have no doubt the LW/EA community is having a positive impact on people’s moral character. But that does not mean the community can’t also produce harmful side-effects, identifiable as weak trends among community-goers that are not present in other groups. Where such side-effects exist, shouldn’t they be curbed?