I am not assuming they are Bayesians necessarily, but I think it’s fine to take that case too. Let’s suppose that Bob finds that whenever he calls upon Bright for help (in his head, so nobody can observe this), he gets an unexpectedly high success rate in whatever he tries. Let’s further suppose that Dark is believed to hate kittens (and that this matters more to him than hiding his existence), and that Daisy is Faerie’s chief veterinarian and is aware of a number of mysterious deaths of kittens that she can’t rationally explain. She is afraid to discuss this with anyone, so it’s private. For numeric probabilities, take, say, 0.7 for each.
I think your degree of belief in their rationality (and their trustworthiness, in the sense of not trying to mislead you, and their sanity, in the sense of having priors at least mildly compatible with yours) should have a very large effect on how much you update on the evidence that they claim a belief.
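One way to make "trust scales the update" concrete is a toy model (my own sketch, not something the thread commits to): treat the witness's testimony as evidence with some likelihood ratio, and discount that ratio toward 1 (no evidence) in proportion to your trust in their rationality. The geometric discount used here is an illustrative assumption, not the only choice.

```python
def posterior(prior, likelihood_ratio, trust):
    """Posterior belief after a witness's claim, with the claim's
    evidential force discounted by how much we trust the witness.

    trust = 1.0: take the full likelihood ratio at face value.
    trust = 0.0: the testimony carries no evidence at all.
    The geometric interpolation lr**trust is one simple choice.
    """
    effective_lr = likelihood_ratio ** trust
    odds = (prior / (1.0 - prior)) * effective_lr
    return odds / (1.0 + odds)

# Suppose Bright's existence has prior 0.1 for the outside observer,
# and Bob's report, taken at face value, has likelihood ratio 4.
fully_trusted = posterior(0.1, 4.0, 1.0)   # full update
half_trusted = posterior(0.1, 4.0, 0.5)    # discounted update
distrusted = posterior(0.1, 4.0, 0.0)      # no update: stays at 0.1
```

With two witnesses pointing at incompatible sorcerers, each testimony would be discounted separately, which is why low mutual trust between Bob and Daisy translates into small updates for the observer as well.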
The fact that they know of each other and still hold wildly divergent beliefs indicates that they don’t trust each other’s reasoning skills. Why would you give them much more weight than they give each other?
For this experiment, I don’t want to get involved in the social aspect of this. Suppose they aren’t aware of each other, or it’s very impolite to talk about sorcerers, or whatever. I am curious about their individual minds, and about an outside observer that can observe both (i.e. me).