Is there any real-group analog to the answer to problem t becoming mutual knowledge for the entire group? I can’t think of a single disagreement here EVER to which the answer has been revealed. Further, I don’t expect much revelation until Omega actually shows up.
Drawing Two Aces might count.
A bunch of people got the wrong answer, and the correct answer was presumed to run against naive intuition if you don’t know how to do the math. But any doubters understood the right answer once it was pointed out.
Thanks for recollecting that. That was a case where someone wrote a program to compute the answer, which could be taken as definitive.
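I don’t have that program handy, but if memory serves the puzzle used a four-card deck (ace of spades, ace of hearts, and two non-aces; that deck makeup and the card labels below are my recollection and choices, not quotes from the post), and an exhaustive enumeration along these lines settles both conditionals:

```python
# Hedged sketch: enumerate every two-card hand from the four-card
# deck I remember the post using (card labels are my own choice).
from itertools import combinations

deck = ["AS", "AH", "2C", "2D"]
hands = list(combinations(deck, 2))  # all 6 equally likely hands

has_ace = [h for h in hands if any(c.startswith("A") for c in h)]
has_spade_ace = [h for h in hands if "AS" in h]
both_aces = [h for h in hands if all(c.startswith("A") for c in h)]

# P(both aces | at least one ace) = 1/5
print(len([h for h in has_ace if h in both_aces]) / len(has_ace))
# P(both aces | holds the ace of spades) = 1/3
print(len([h for h in has_spade_ace if h in both_aces]) / len(has_spade_ace))
```

The gap between 1/5 and 1/3 was, as I recall, the counterintuitive part, and an enumeration anyone can check is more convincing than verbal argument.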
I just counted up the first answers people gave: they ran 29 to 3 in favor of the correct answer. So there wasn’t much disagreement to begin with.
I don’t think that qualified. There was no revelation, just agreement on the process and on the result. That was not a question analogous to PhilGoetz’s model, where some agents have more accurate estimates and you use the result on one question to determine how accurate they might be on other topics.
I can’t think of a single disagreement here to which the answer has been revealed, either. But—spoiler alert—having the answers to numerous problems revealed to at least some of the agents is the only factor I’ve found that can get the simulated agents to improve their beliefs.
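To make that concrete, here is a minimal sketch of the kind of setup I mean. The specific structure (fixed per-agent accuracies, scoring agents on the revealed problems, log-odds-weighted voting on the rest) is my illustration under assumed parameters, not the actual simulation code:

```python
# Minimal sketch (assumptions, not the real simulation): agents guess
# True/False problems with fixed hidden accuracies; the answers to an
# initial batch of problems are revealed, and votes on the remaining
# problems are weighted by each agent's revealed track record.
import math
import random

random.seed(0)
N_AGENTS, N_PROBLEMS, N_REVEALED = 30, 200, 50

accuracies = [random.uniform(0.4, 0.9) for _ in range(N_AGENTS)]
truth = [random.random() < 0.5 for _ in range(N_PROBLEMS)]
# answers[i][t] is agent i's independent guess on problem t.
answers = [[truth[t] if random.random() < accuracies[i] else not truth[t]
            for t in range(N_PROBLEMS)]
           for i in range(N_AGENTS)]

# Each agent's observed accuracy on the revealed problems only.
scores = [sum(answers[i][t] == truth[t] for t in range(N_REVEALED)) / N_REVEALED
          for i in range(N_AGENTS)]

def weighted_vote(t):
    """Pool votes on problem t, weighting each agent by the log-odds of
    the accuracy estimated from the revealed problems."""
    total = 0.0
    for i in range(N_AGENTS):
        p = min(max(scores[i], 0.01), 0.99)  # clamp away from 0 and 1
        weight = math.log(p / (1 - p))
        total += weight if answers[i][t] else -weight
    return total > 0

unrevealed = list(range(N_REVEALED, N_PROBLEMS))
majority = sum((sum(a[t] for a in answers) > N_AGENTS / 2) == truth[t]
               for t in unrevealed) / len(unrevealed)
weighted = sum(weighted_vote(t) == truth[t] for t in unrevealed) / len(unrevealed)
print(f"plain majority: {majority:.2f}  track-record weighted: {weighted:.2f}")
```

With parameters like these the track-record-weighted pool tends to beat the plain majority, which is the sense in which revealing answers lets the group improve; the magnitudes depend entirely on the assumed numbers.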
It’s difficult to apply the simulation results to people, who can, in theory, be convinced of something by following a logical argument. The reasons why I think we can model that with a simple per-person accuracy level might need a post of their own.
“having the answers to numerous problems revealed to at least some of the agents is the only factor I’ve found that can get the simulated agents to improve their beliefs.”

Oops—that statement was based on a bug in my program.
The usual situation does involve agents’ answers drifting differentially toward “true” as time passes. Your model is extremely simplified, but [edit: may be] accurate enough for the purpose.