Are you saying we should deliberately handicap our estimation of racism, because even people who disagree will go along with it?
I’m saying that the question of “Is Bill a racist?” has structural similarities to “Is Bill a witch?”, both in how the question is pursued and in the social consequences of the conclusion, and that tacit support of the witch-hunting apparatus because of the social costs of not supporting it (rather than because of a genuine dislike for witches) is a group failure mode that could be avoided by consciously acknowledging it as a group failure mode. Further, it seems to me that rationalists with an interest in epistemic rationality should make the investment in avoiding that failure mode.
So … you’re saying you’re worried that everyone will overreact to the correct estimate of racism, because they expect everyone else to and don’t want to be excluded? I suspect I still don’t understand, since that doesn’t really sound like an epistemic failure...
Mathematically speaking, not overreacting when estimating the racism of accused people is weak evidence of being a racist.
Both a moderate non-racist and a moderate racist have several reasons not to organize witch-hunts against people who said something that can be interpreted as racist. However, the moderate racist has one additional reason: self-interest, because the next day it could be him.
(In a different context, people who speak up for the right of those accused of terrorism to a fair trial are suspected of being sympathetic to terrorism. In the Middle Ages, people who spoke against the killing of heretics were suspected of heresy. Etc.)
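The “weak evidence” claim can be made concrete with Bayes’ rule. A toy sketch, with entirely hypothetical numbers: suppose most people in both groups oppose witch-hunts, but moderate racists do so slightly more often (self-interest supplies the extra reason). Observing opposition then shifts the probability of “racist” only a little:

```python
# Toy Bayesian update illustrating "weak evidence" (all numbers hypothetical).
prior_racist = 0.05               # assumed base rate of moderate racists
p_oppose_given_racist = 0.9       # assumed: racists almost always oppose witch-hunts
p_oppose_given_nonracist = 0.7    # assumed: most non-racists oppose them too

# Total probability of observing opposition to witch-hunts.
p_oppose = (p_oppose_given_racist * prior_racist
            + p_oppose_given_nonracist * (1 - prior_racist))

# Posterior probability of "racist" given opposition.
posterior_racist = p_oppose_given_racist * prior_racist / p_oppose

print(f"prior:     {prior_racist:.3f}")      # 0.050
print(f"posterior: {posterior_racist:.3f}")  # ~0.063 — a small update, not a verdict
```

The posterior is higher than the prior (it really is evidence), but only slightly — which is exactly the distinction between “evidence for Y” and “overwhelming evidence for Y” that the rest of the thread turns on.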
Exactly. It doesn’t sound like an epistemic failure, because it is, in fact, true. The epistemic failure would be to assume that if X is evidence for Y, it must be overwhelming evidence.
As in: “the only reason why anyone would care about X is because they are Y.” (Common subtrope: “If you are not a criminal, you have nothing to hide from the government.”)
Sure, overreaction would be an epistemic failure—if it were genuine. But the whole point of this idea is that it’s not. It’s faked, based on correctly realizing that not overreacting is dangerous.
That’s not to say it isn’t a failure mode, just not an epistemic one. In any case, I was just curious if I had missed some relevant epistemic failure. Tapping out, unless you think there is such an additional failure and I’m just an idiot.