I think you’re right, but I’ll start by assuming that you’re wrong, because I have an alternative explanation to offer those who disagree with you (and which I think is the most convincing one if we assume that the signalling explanation isn’t correct). I think Eukaryote is missing one important cause. Assume that most writers arguing for caring more or less about a cause do so because they believe this is an important way to serve that cause. Particularly outside our community, people rarely write about causes just for intellectual entertainment. “Everything is signalling” is a valid response, but I’ll first address the rational-actor case, since that informs the limits and types of signalling. If I am writing to people who are “broadly value aligned”, an admittedly imprecise term, I tend to expect that they are not opposed to me on the topics I think are most important. And I expect most (85% if I’m optimistic) reading to happen among people who are broadly value aligned with the author, at least with respect to the topic of the piece.
If someone cares less about something, I might value that directly (because I dislike the results of them caring), and I might value that indirectly (because I expect effort they take away from the target of my writing to go towards other causes that I value). However, conditional on broad value alignment, the causes that my readers care passionately about are not causes I’m opposed to, and the causes that I care passionately about are not ones that they’re opposed to. So direct benefit, except in writing that is explicitly trying to convince people “from the other side”, will rarely motivate me to try to make people care less.
Most communities have more than 3-4 potential cause areas. One specific friend of mine will physically go to events to support gun control, gender equality, fighting racism, homelessness prevention, Palestine, Pride, her church, abortion rights, and other topics. If I make her less confident that gun control is an effective way of reducing violence, the effort she frees up will be spread fairly broadly. It is unlikely that whatever topic I find most important, or even whatever bundle of topics I find most important, is going to receive much marginal support. EAs are relatively unusual in that making someone less active on one cause area has a high expected effect on specific other cause areas.