I can’t think of a single disagreement here to which the answer has been revealed, either. But—spoiler alert—having the answers to numerous problems revealed to at least some of the agents is the only factor I’ve found that can get the simulated agents to improve their beliefs.
It’s difficult to apply the simulation results to people, who can, in theory, be convinced of something by following a logical argument. The reasons why I think we can model that with a simple per-person accuracy level might need a post of their own.
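To make the setup concrete, here is a minimal sketch of the kind of simulation being discussed. Everything in it (the agent count, the averaging update rule, the way revelation is modeled) is an illustrative assumption, not the actual program:

```python
import random

# Hypothetical sketch, not the author's program: agents hold a credence in
# the true answer to a binary question, each has a fixed per-person
# accuracy, unrevealed agents update by averaging with a random peer, and
# "revelation" pins some fraction of agents to the correct answer.

def simulate(n_agents=100, n_rounds=50, reveal_frac=0.1, seed=0):
    rng = random.Random(seed)
    accuracy = [rng.uniform(0.4, 0.7) for _ in range(n_agents)]
    # Initial credence in the true answer: accuracy plus noise, clamped to [0, 1].
    belief = [min(1.0, max(0.0, a + rng.gauss(0.0, 0.1))) for a in accuracy]
    revealed = set(rng.sample(range(n_agents), int(reveal_frac * n_agents)))
    for _ in range(n_rounds):
        for i in range(n_agents):
            if i in revealed:
                belief[i] = 1.0  # the answer was revealed to this agent
            else:
                j = rng.randrange(n_agents)  # consult a random peer
                belief[i] = 0.5 * (belief[i] + belief[j])
    return sum(belief) / n_agents  # population's mean credence in the truth

print(f"with revelation:    {simulate(reveal_frac=0.1):.3f}")
print(f"without revelation: {simulate(reveal_frac=0.0):.3f}")
```

With `reveal_frac` at zero the mean credence just hovers near its starting point, while even a small revealed fraction pulls the whole population towards the truth, which matches the effect described above.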
> having the answers to numerous problems revealed to at least some of the agents is the only factor I’ve found that can get the simulated agents to improve their beliefs.
Oops—that statement was based on a bug in my program.
The usual situation does involve agents changing their answers over time, drifting differentially towards “true”—your model is extremely simplified, but [edit: may be] accurate enough for the purpose.
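That drift dynamic could be sketched the same way; the rate constant and the accuracy-proportional step below are again illustrative assumptions, not anyone's actual model:

```python
# Hypothetical sketch of the drift dynamic this comment describes: each
# round, every agent moves a small, accuracy-dependent step toward the
# true answer (credence 1.0), so more accurate agents converge faster.
def drift_step(beliefs, accuracies, rate=0.05):
    return [b + rate * a * (1.0 - b) for b, a in zip(beliefs, accuracies)]

beliefs, accuracies = [0.5, 0.5], [0.9, 0.3]
for _ in range(20):
    beliefs = drift_step(beliefs, accuracies)
print(beliefs)  # the 0.9-accuracy agent ends up much closer to 1.0
```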