How do lies affect Bayesian Inference?
(Relative likelihood notation is easier, so we will use that)
I heard a thing. Well, I more heard a thing about another thing. Before I heard it, I didn’t know one way or the other at all, so my prior was the Bayesian null prior of 1:1 odds. Let’s say the thing I heard is “Conspiracy thinking is bad for my epistemology”, and let’s pretend it was relevant at the time rather than coming up out of nowhere. What is the chance that someone would hold this opinion, given that they are not part of any conspiracy against me? Maybe 50%? If I heard it in a Rationality-influenced space, probably more like 80%. Now, what is the chance that someone would share this as their opinion, given that they are involved in a conspiracy against me? Somewhere between 95% and 100%, so let’s say 99%. Our prior is 1:1 and our likelihood ratio is 80:99, so our final odds of someone not being a conspirator vs. being a conspirator are 80:99, or about 1:1.24. Therefore, my expected probability of someone not being a conspirator went from 50% down to about 45%. Huh.
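Here is the same arithmetic as a quick Python sketch, just to make the odds-form update checkable (the 80% and 99% figures are my assumed likelihoods, not data):

```python
# Odds-form Bayes update for the scenario above.
# Prior odds are "not a conspirator" : "conspirator".

def posterior_odds(prior_odds, p_e_given_innocent, p_e_given_conspirator):
    """Multiply the prior odds by the likelihood ratio P(E|innocent) : P(E|conspirator)."""
    return prior_odds * (p_e_given_innocent / p_e_given_conspirator)

def odds_to_probability(odds):
    """Convert odds of H : not-H into P(H)."""
    return odds / (1 + odds)

prior_odds = 1.0                     # 1:1, i.e. P(not a conspirator) = 0.5
p_opinion_given_innocent = 0.80      # assumed: chance an innocent person shares the opinion
p_opinion_given_conspirator = 0.99   # assumed: chance a conspirator shares the opinion

post = posterior_odds(prior_odds, p_opinion_given_innocent, p_opinion_given_conspirator)
print(post)                          # ~0.808, i.e. odds of about 80:99
print(odds_to_probability(post))     # ~0.447, i.e. about 45% "not a conspirator"
```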
For the love of all that is good, please shoot holes in this and tell me I screwed up somewhere.
There are lots of holes. Here are a few:
If they’re actually in a conspiracy against you, it’s likely that they don’t even want you thinking about conspiracies. It’s not in their interest for you to associate them with the concept “conspiracy” in any way, since people who don’t think about conspiracies at all are unlikely to unmask them. By this reasoning, the chance of a conspirator drawing attention to thinking about conspiracies is not anywhere near 95% - maybe not even 20%.
A highly competent conspiracy member will give you no information that distinguishes the existence of the conspiracy from the non-existence of the conspiracy. If you believe that they have voluntarily given you such information, then you should rule out that the conspiracy consists of competent members. This takes a chunk out of your “this person is a conspirator” weight.
There are always more hypotheses. Splitting into just two and treating them as internally homogeneous is always a mistake.
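To make the first and third points concrete, here is a rough sketch that splits “conspirator” into competent and incompetent sub-hypotheses; every prior and likelihood below is invented purely to show the mechanics:

```python
# Illustrative three-hypothesis update. All numbers are made up for illustration;
# the 0.20 reflects the "a competent conspirator avoids the topic" argument above.

priors = {
    "not a conspirator": 0.50,
    "incompetent conspirator": 0.25,
    "competent conspirator": 0.25,
}

# P(they volunteer "conspiracy thinking is bad for your epistemology" | hypothesis)
likelihoods = {
    "not a conspirator": 0.80,
    "incompetent conspirator": 0.99,
    "competent conspirator": 0.20,
}

unnormalised = {h: priors[h] * likelihoods[h] for h in priors}
total = sum(unnormalised.values())
posteriors = {h: v / total for h, v in unnormalised.items()}

for h, p in posteriors.items():
    print(f"{h}: {p:.3f}")
# not a conspirator: 0.573, incompetent conspirator: 0.355, competent conspirator: 0.072
```

Under these made-up numbers, hearing the statement actually raises the probability of “not a conspirator” from 0.50 to about 0.57, because the competent-conspirator hypothesis takes most of the hit.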
I hope this helps! Thinking about conspiracies doesn’t have to be bad for your epistemology, but I suspect that in practice it is much more often harmful than helpful.
Yeah. I wanted to assume they were being forced to give an opinion, so that “what topics a person is or isn’t likely to bring up” wasn’t a confounding variable. Your point here suggests that a conspirator’s response might be more like “I don’t think about them”, or some kind of null opinion.
This sort of gets to the core of what I was wondering about but don’t know how to solve: how lies tend to pervert Bayesian inference. “Simulacra levels” may be relevant here. I would think that a highly competent conspirator would only want to give you information that reduces your prediction of a conspiracy existing, but this seems recursive, in that anything that would reduce your prediction of a conspiracy has an increased likelihood of being said by a conspirator. Would the effect of lies by bad-faith actors who know your priors be that certain statements just don’t update your priors, because they no longer carry any new information? I don’t know what limit this reduces to, and I don’t yet know what math I would need to solve it.
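The degenerate case seems writable-down, at least: under the assumption that a perfectly strategic conspirator can match the innocent distribution of statements exactly, the likelihood ratio collapses to 1 and the prior doesn’t move at all. A toy sketch (numbers made up):

```python
# Sketch of the "perfectly strategic liar" limit: if the conspirator chooses statements
# so that P(statement | conspirator) == P(statement | innocent), the likelihood ratio
# is 1 and the posterior equals the prior. All numbers are made up.

def update(prior_odds, p_given_innocent, p_given_conspirator):
    return prior_odds * (p_given_innocent / p_given_conspirator)

prior_odds = 1.0
p_given_innocent = 0.80

# A naive conspirator who over-produces the reassuring statement still leaks evidence:
print(update(prior_odds, p_given_innocent, 0.99))   # ~0.808 -> odds move against them

# A conspirator who exactly mimics the innocent distribution leaks nothing:
print(update(prior_odds, p_given_innocent, 0.80))   # 1.0 -> prior unchanged
```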
Naturally. I think “belief propagation” (in Bayesian networks, as opposed to backpropagation) might be related to a single observation affecting multiple hypotheses? But I haven’t brushed up on that in a while.
Thank you, it does help! I know some people who revel in conspiracy theories, and some who believe conspiracies are so unlikely that they dismiss any possibility of one out of hand. I get left in the middle with the feeling that some situations “don’t smell right”, without having a provable, quantifiable reason for why I feel that way.
The event is more likely to occur if the person is a conspirator, so you hearing the statement should indeed increase your credence for conspiracy (and symmetrically decrease your credence for not-conspiracy).