Yeah. I wanted to assume they were being forced to give an opinion, so that “what topics a person is or isn’t likely to bring up” wasn’t a confounding variable. Your point here suggests that a conspirator’s response might be more like “I don’t think about them”, or some kind of null opinion.
This sort of gets to the core of what I was wondering about but I’m not sure how to solve: how lies tend to pervert Bayesian inference. “Simulacra levels” may be relevant here. I would think that a highly competent conspirator would want to give you only information that reduces your prediction of a conspiracy existing, but this seems sort of recursive, in that anything that would reduce your prediction of a conspiracy has an increased likelihood of being said by a conspirator. Would the effect of lies by bad-faith actors who know your priors be that certain statements just don’t update your priors, because that uncertainty means they don’t actually add any new information? I don’t know what limit this reduces to, and I don’t yet know what math I would need to solve it.
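To make that intuition concrete, here’s a minimal toy model (entirely my own construction; the probabilities are made up for illustration). The update hinges on the likelihood ratio P(statement | conspiracy) / P(statement | no conspiracy), and a perfectly competent liar who knows your priors can drive that ratio to 1, at which point the statement carries zero information:

```python
# Toy model: Bayesian update on a statement from a possibly-lying source.
# Hypotheses: C = "conspiracy exists", ~C = "no conspiracy".
# All probabilities below are illustrative assumptions, not real estimates.

def posterior_conspiracy(prior_c, p_statement_given_c, p_statement_given_not_c):
    """Standard Bayes update: P(C | statement)."""
    num = p_statement_given_c * prior_c
    den = num + p_statement_given_not_c * (1 - prior_c)
    return num / den

prior = 0.10  # your prior that a conspiracy exists

# Case 1: careless source. An innocent person denies involvement with p=0.9;
# a careless conspirator denies it with only p=0.6 (sometimes slips up).
print(posterior_conspiracy(prior, 0.6, 0.9))  # -> ~0.069, the denial lowers P(C)

# Case 2: a perfectly competent conspirator who knows your priors and mimics
# the innocent distribution exactly: P(statement | C) == P(statement | ~C).
print(posterior_conspiracy(prior, 0.9, 0.9))  # -> 0.10, no update at all
```

So in the limit of a perfect mimic, you get exactly the null update you’re describing; the interesting cases seem to be where mimicry is imperfect or costly, which keeps the likelihood ratio away from 1.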
Naturally. I think “belief propagation” might be the term for certain observations affecting multiple hypotheses? But I haven’t brushed up on that in a while.
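If I’m remembering right, the core of it is that a single observation re-weights the whole hypothesis space at once. Here’s a minimal sketch of that effect (plain Bayesian enumeration over a flat hypothesis set, not actual message passing on a network; the hypothesis names and numbers are invented):

```python
# Toy example: one observation updating a posterior over several hypotheses.
# This is plain Bayesian enumeration, the flat special case of what belief
# propagation generalizes to structured networks. All numbers are invented.

hypotheses = {
    "no_conspiracy": 0.85,
    "small_conspiracy": 0.10,
    "large_conspiracy": 0.05,
}

# Likelihood of observing a suspicious document leak under each hypothesis.
likelihood = {
    "no_conspiracy": 0.01,
    "small_conspiracy": 0.20,
    "large_conspiracy": 0.50,
}

# One observation re-weights every hypothesis simultaneously.
unnormalized = {h: p * likelihood[h] for h, p in hypotheses.items()}
total = sum(unnormalized.values())
posterior = {h: v / total for h, v in unnormalized.items()}
print(posterior)
# no_conspiracy drops to ~0.16, small rises to ~0.37, large to ~0.47
```

Full belief propagation earns its keep when the hypotheses live on a structured graph, but even the flat case shows one observation moving several beliefs simultaneously.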
Thank you, it does help! I know some people who revel in conspiracy theories, and some who believe conspiracies are so unlikely that they dismiss any possibility of one out of hand. I get left in the middle with the feeling that some situations “don’t smell right”, without having a provable, quantifiable excuse for why I feel that way.