There are lots of holes. Here are a few:
If they’re actually in a conspiracy against you, it’s likely that they don’t even want you thinking about conspiracies. It’s not in their interest for you to associate them with the concept “conspiracy” in any way, since people who don’t think about conspiracies at all are unlikely to unmask them. By this reasoning, the chance of a conspirator drawing attention to thinking about conspiracies is not anywhere near 95%, maybe not even 20% (the sketch below shows how much that gap matters).
A highly competent conspiracy member will give you no information that distinguishes the existence of the conspiracy from the non-existence of the conspiracy. If you believe that they have voluntarily given you such information, then you should rule out that the conspiracy consists of competent members. This takes a chunk out of your “this person is a conspirator” weight.
There are always more hypotheses. Splitting the hypothesis space into just two and treating each side as internally homogeneous is always a mistake.
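To put rough numbers on the first point, here is a minimal sketch of the update, assuming an invented 1% prior and an invented 30% chance that a non-conspirator volunteers the same remark; only the 95%-versus-20% contrast comes from the point above.

```python
# Toy Bayes update: how much does "they volunteered an opinion about
# conspiracies" move your estimate? All specific numbers are hypothetical.

def posterior(prior, p_obs_given_conspirator, p_obs_given_innocent):
    """P(conspirator | observation) via Bayes' rule over two hypotheses."""
    joint_c = prior * p_obs_given_conspirator
    joint_i = (1 - prior) * p_obs_given_innocent
    return joint_c / (joint_c + joint_i)

prior = 0.01  # invented prior that this particular person is conspiring

# If a conspirator were 95% likely to volunteer the remark, versus an
# invented 30% for a non-conspirator, the remark is mild evidence for a
# conspiracy:
print(posterior(prior, 0.95, 0.30))  # ~0.031

# If, as argued above, a competent conspirator avoids the topic (20% or
# less), the very same remark becomes evidence against one:
print(posterior(prior, 0.20, 0.30))  # ~0.0067
```

The direction of the update flips entirely on an assumption about conspirator behaviour that the remark itself cannot settle.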
I hope this helps! Thinking about conspiracies doesn’t have to be bad for your epistemology, but I suspect that in practice it is much more often harmful than helpful.
Yeah. I wanted to assume they were being forced to give an opinion, so that “what topics a person is or isn’t likely to bring up” wasn’t a confounding variable. Your point here suggests that a conspirator’s response might be more like “I don’t think about them”, or some kind of null opinion.
This gets to the core of what I was wondering about but don’t know how to solve: how lies tend to pervert Bayesian inference. “Simulacra levels” may be relevant here. I would expect a highly competent conspirator to give you only information that reduces your estimate that a conspiracy exists, but this seems recursive, in that anything that would reduce your estimate of a conspiracy is thereby more likely to be said by a conspirator. Would the effect of lies by bad-faith actors who know your priors be that certain statements just don’t update your priors, because that uncertainty means they add no new information? I don’t know what limit this reduces to, or what math I would need to solve it.
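One way to make the “no new information” worry concrete: if a conspirator knows your likelihood model and restricts themselves to statements that are exactly as probable under “conspiracy” as under “no conspiracy”, the likelihood ratio of everything they say is 1, and Bayes’ rule leaves your prior untouched. This is only a toy sketch with made-up numbers, and it doesn’t resolve the recursion (you modelling them modelling you, and so on), but it shows the two extremes and the mixed case in between.

```python
# Toy model of a speaker who may be strategically choosing statements.
# All specific probabilities are hypothetical.

def posterior(prior, p_stmt_given_conspirator, p_stmt_given_innocent):
    joint_c = prior * p_stmt_given_conspirator
    joint_i = (1 - prior) * p_stmt_given_innocent
    return joint_c / (joint_c + joint_i)

prior = 0.01

# Naive reading of a reassuring statement: an innocent person says it 60%
# of the time, a conspirator only 10% of the time. Big downward update.
print(posterior(prior, 0.10, 0.60))  # ~0.0017

# Perfect mimic: a conspirator who knows your model says it exactly as
# often as an innocent person would. Likelihood ratio 1, so no update.
print(posterior(prior, 0.60, 0.60))  # 0.01, identical to the prior

# Partial uncertainty: you think a conspirator plays the mimic strategy
# with probability 0.5 and otherwise behaves naively. The update weakens
# but does not vanish.
q = 0.5
p_strategic_mix = q * 0.60 + (1 - q) * 0.10
print(posterior(prior, p_strategic_mix, 0.60))  # ~0.0059
```

So in this toy version, the limit of a fully strategic liar is not that your estimate gets pushed down; it is that their statements stop being evidence at all, and whatever residual update remains comes from your uncertainty about how strategic they are.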
Naturally. I think “backpropagation” might be related to certain observations affecting multiple hypotheses? But I haven’t brushed up on that in a while.
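On the multiple-hypotheses point, the same arithmetic extends to however many hypotheses you track: a single observation reweights all of them at once when you renormalise (this is ordinary Bayesian updating over a partition; the related machinery in graphical models is belief propagation rather than backpropagation). A sketch with three made-up hypotheses, splitting “conspirator” into competent and incompetent as the competence point above suggests:

```python
# One observation reweighting more than two hypotheses at once.
# Hypothetical likelihoods of "they volunteered a reassuring opinion":
#   competent conspirator:   mimics an innocent person (0.60)
#   incompetent conspirator: overdoes the reassurance (0.90)
#   no conspiracy:           0.60

priors = {"competent": 0.005, "incompetent": 0.005, "none": 0.99}
likelihoods = {"competent": 0.60, "incompetent": 0.90, "none": 0.60}

joint = {h: priors[h] * likelihoods[h] for h in priors}
total = sum(joint.values())
posteriors = {h: joint[h] / total for h in joint}
print(posteriors)
# "incompetent" rises a little; "competent" and "none" keep their relative
# proportions, because their likelihoods for this observation are equal.
```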
Thank you, it does help! I know some people who revel in conspiracy theories, and some who think conspiracies are so unlikely that they dismiss any possibility of one out of hand. I’m left in the middle with the feeling that some situations “don’t smell right”, without a provable, quantifiable reason for why I feel that way.