But are theories that tend to explode and eat up communal resources therefore less likely to be true? If not, then avoiding them for the sake of preserving communal resources is a systematic distortion of the community’s beliefs.
Expected infrequent discussion of a theory shouldn’t lower estimates of its probability. (Does the intuition that such theories should be seen as less likely come from the fact that most natural theories predict discussion of themselves? Erroneous theorizing also predicts that; for example, “If this statement is correct, it will be the only topic of all future discussions.”)
In general, it shouldn’t be possible to expect well-known systematic distortions for any reason, because they should’ve been recalibrated away immediately. What not discussing a theory should cause is lack of precision (or progress), not systematic distortion.
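One way to make this precise (my own gloss, not something worked out in the thread): suppose a taboo is known to push stated estimates below one’s best estimate by some offset b, so that what gets said is q = p − b. Anyone who knows b recovers p = q + b, and if you expect your own future credence to be off in a known direction, you should already have shifted it. Formally, coherent beliefs satisfy

$$\mathbb{E}\left[\,p_{\text{later}} \mid \text{everything you know now}\,\right] = p_{\text{now}},$$

so a predictable, known-direction error cannot persist; what the missing discussion can still cost is precision, not a systematic offset.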
Consider a situation where:
People are discussing phenomenon X.
In fact, a conflict theory is a good explanation for phenomenon X.
However, people only state mistake theories for X, because conflict theories are taboo.
Is your prediction that the participants in the conversation, readers, etc., are not misled by this? Would you predict that, if you gave them a survey afterwards asking how they would explain X, they would in fact give a conflict theory rather than a mistake theory, having corrected for the distortion due to the conflict-theory taboo?
Would you correct your response in that way? (Should you?) If the target audience tends to act similarly, so would they.
Aside from that, “How do you explain X?” is quite ambiguous and anchors on a well-understood framing rather than an apt one. “Does mistake theory explain this case well?” is better, because you may well use a bad theory to think about something while knowing it’s a bad theory for explaining it. If it’s the best you can do, at least this way you have gears to work with. Not having a good theory that would counterfactually have been readily available, because it’s taboo and wasn’t developed, is of course terrible, but that’s not a reason to embrace the bad theory as correct.
Perhaps (75% chance?), in part because I’ve spent >100 hours talking about, reading about, and thinking about good conflict theories. I would very likely have been misled 3 years ago. I was only able to get to this point because enough people around me were willing to break conflict theory taboos.
It is not the case that everybody knows. To get from a state where not everybody knows to a state where everybody knows, it must be possible to talk openly about such things. (I expect the average person on this website to make the correction with <50% probability, even with the alternative framing “Does mistake theory explain this case well?”)
It actually does have to be a lot of discussion. Over-attachment to mistake theory (even when a moderate amount of contrary evidence is presented) is a systematic bias I’ve observed, and it can be explained by factors such as: conformity, social desirability bias (incl. fear), conflict-aversion, desire for a coherent theory that you can talk about with others, getting theories directly from others’ statements, being bad at lying (and at detecting lying), etc. (This is similar to (and may even be considered a special case of) the question of why people are misled by propaganda, even when there is some evidence that the propaganda is propaganda; see Gell-Mann amnesia.)
This seems a bit off, as Jessica clearly knows about conflict theory. The whole point of making a particular type of theory taboo is that it can’t become common knowledge.
That’s relevant to the example, but not to the argument. Consider a hypothetical Jessica who is less interested in conflict theory, or a topic other than conflict theory. Also, common knowledge doesn’t seem to play a role here, and “doesn’t know about” is a level of taboo that contradicts the assumption I posited about the argument from selection effect being “well-known”.
In general, it shouldn’t be possible to expect well-known systematic distortions for any reason, because they should’ve been recalibrated away immediately.
Hm. Is “well-known” good enough here, or do you actually need common knowledge? (I expect you to be better than me at working out the math here.) If it’s literally the case that everybody knows that we’re not talking about conflict theories, then I agree that everyone can just take that into account and not be confused. But the function of taboos, silencing tactics, &c. among humans would seem to be maintaining a state where everyone doesn’t know.
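For reference, a standard way to draw that distinction (this is textbook epistemic logic, not math from the thread): write $K_i\varphi$ for “agent $i$ knows $\varphi$”. Then

$$E\varphi \;\equiv\; \bigwedge_i K_i\varphi, \qquad C\varphi \;\equiv\; E\varphi \wedge EE\varphi \wedge EEE\varphi \wedge \cdots$$

“Everybody knows” is $E\varphi$; common knowledge $C\varphi$ additionally requires everyone to know that everyone knows, and so on. With only $E\varphi$ about the taboo, each person may still be unsure whether others will make the correction, which is the gap this question is pointing at; the reply below argues that individual recalibration doesn’t depend on that.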
Is “well-known” good enough here, or do you actually need common knowledge?
There is no need for coordination or dependence on what others think. If you expect yourself to be miscalibrated, you just fix that. If most people act this way and accept the argument that convinced you, then you expect them to have done the same.
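A toy sketch of that point, purely illustrative (the true credence and the size of the taboo-induced shift below are assumed numbers, not anything from the discussion): if the distortion’s direction and rough size are known, each agent can undo it alone, and no systematic error remains, without any coordination.

```python
import random

# Toy model (illustrative assumptions): a well-known taboo depresses each
# agent's naive credence in the conflict theory by a known amount. Each agent
# corrects individually; no coordination with anyone else is involved.

random.seed(0)

TRUE_P = 0.7         # assumed "correct" credence, for the sake of the toy model
TABOO_SHIFT = 0.3    # assumed size of the well-known distortion

def naive_estimate() -> float:
    """Credence formed from the distorted public discussion."""
    noise = random.gauss(0, 0.05)
    return min(1.0, max(0.0, TRUE_P - TABOO_SHIFT + noise))

def corrected_estimate() -> float:
    """The same agent, adding back the known distortion on their own."""
    return min(1.0, max(0.0, naive_estimate() + TABOO_SHIFT))

naive = [naive_estimate() for _ in range(10_000)]
corrected = [corrected_estimate() for _ in range(10_000)]

print(f"mean naive error:     {sum(naive) / len(naive) - TRUE_P:+.3f}")       # systematically low
print(f"mean corrected error: {sum(corrected) / len(corrected) - TRUE_P:+.3f}")  # roughly zero
```

The noise term is what the correction can’t remove, which lines up with the earlier point that the expected cost of not discussing a theory is lost precision rather than a systematic distortion.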