OK, I see what you’re doing now. It’s an interesting model, though one feature jumps out at me:
In other words: My enemy’s belief in P is evidence against P.
Although this phenomenon is a well-known fallacy among human beings, it doesn’t seem like it should be the rational behavior. But then I noticed that the probabilities p_i can be less than 1⁄2 in your model, and that some of your agents are in fact reliably anti-correct. This seems like a probable cause of a binary group split, if I’m understanding correctly.
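To spell out the arithmetic behind that reading (assuming the simplest likelihood model for an assertion, which may not be exactly yours): if agent i asserts P with probability p_i when P is true and with probability 1 - p_i when P is false, then hearing the assertion multiplies the prior odds of P by

$$\frac{\Pr(\text{asserts } P \mid P)}{\Pr(\text{asserts } P \mid \neg P)} = \frac{p_i}{1 - p_i},$$

which is less than 1 exactly when p_i < 1⁄2, so an anti-correct agent’s stated belief really is evidence against it.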
What’s the result if you make the probabilities (and accordingly, people’s estimates of the probabilities) range from 1⁄2 to 1 instead of from 0 to 1?
Then everybody converges on the correct answer for every question. And you just answered the question of why Bayesians should agree to agree: because Bayesians can’t perform worse than random on average, their accuracies range from 1⁄2 to 1 and are not biased on any problem (unless the evidence is biased, in which case you’re screwed anyway). Averaging their opinions together will thus get the right answer to every (answerable) question. Congratulations! You win 1 Internet!
(The reason for choosing 0 to 1 is explained in the post.)
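Here is a rough numerical check of that claim (a toy sketch, not the model’s actual code; it assumes each agent simply answers each binary question correctly with independent probability p_i, and the group takes a majority vote, which is the same as averaging the agents’ binary answers and rounding):

```python
import random

def group_accuracy(num_agents=25, num_questions=2000, p_low=0.5, p_high=1.0, seed=0):
    """Estimate how often a majority vote of independent agents answers a
    binary question correctly, when each agent's accuracy p_i is drawn
    uniformly from [p_low, p_high]."""
    rng = random.Random(seed)
    accuracies = [rng.uniform(p_low, p_high) for _ in range(num_agents)]
    correct = 0
    for _ in range(num_questions):
        # Each agent independently gets this question right with probability p_i.
        votes_for_truth = sum(rng.random() < p for p in accuracies)
        if votes_for_truth > num_agents / 2:
            correct += 1
    return correct / num_questions

# Accuracies drawn from [1/2, 1]: the group is right nearly every time.
print(group_accuracy(p_low=0.5, p_high=1.0))
# Accuracies drawn from [0, 1]: anti-correct agents cancel the signal, and the
# group hovers around chance (or worse, if the draw happens to lean anti-correct).
print(group_accuracy(p_low=0.0, p_high=1.0))
```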
Although this phenomenon is a well-known fallacy among human beings, it doesn’t seem like it should be the rational behavior
The behavior in my model is rational if the results show that it gets the right answer. So far, it looks like it doesn’t.
some of your agents are in fact reliably anti-correct. This seems like a probable cause of a binary group split, if I’m understanding correctly.
You could probably get the same result by making some problems, rather than some agents, be the ones that usually get answered wrong. An abundance of wrong answers is what makes the agents split. The agents don’t split into the correct agents and the incorrect agents, at least not under the conditions I’ve tested; there are doubtless settings that would get them to do that.
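One way to see why the two setups should be interchangeable for this purpose (a hypothetical sketch under assumed simplifications, not my actual code, and the splitting dynamics themselves aren’t simulated): build the agents-by-problems matrix of right/wrong answers either by giving some agents accuracies below 1⁄2, or by keeping every accuracy above 1⁄2 but inverting it on a random half of the problems. Either way you get a comparable abundance of wrong answers.

```python
import random

def answer_matrix(num_agents=20, num_problems=200, mode="bad_agents", seed=0):
    """Generate a matrix of answers (1 = correct, 0 = wrong), built two ways.

    mode="bad_agents":   accuracies drawn from [0, 1], so some agents are
                         reliably anti-correct on every problem.
    mode="bad_problems": accuracies drawn from [1/2, 1], but a random half of
                         the problems invert every agent's accuracy.
    """
    rng = random.Random(seed)
    if mode == "bad_agents":
        accuracies = [rng.uniform(0.0, 1.0) for _ in range(num_agents)]
        tricky = [False] * num_problems
    else:
        accuracies = [rng.uniform(0.5, 1.0) for _ in range(num_agents)]
        tricky = [rng.random() < 0.5 for _ in range(num_problems)]
    return [[int(rng.random() < (1 - accuracies[i] if tricky[j] else accuracies[i]))
             for j in range(num_problems)]
            for i in range(num_agents)]

def wrong_fraction(matrix):
    """Fraction of all answers that are wrong."""
    answers = [x for row in matrix for x in row]
    return 1 - sum(answers) / len(answers)

# Both constructions yield roughly the same abundance of wrong answers (~50%),
# which is the raw material conjectured to drive the group split.
print(wrong_fraction(answer_matrix(mode="bad_agents")))
print(wrong_fraction(answer_matrix(mode="bad_problems")))
```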