One thing I think is missing from your model is correlation between different answers, and I think that this is actually essential to the phenomenon: ignoring it makes it look like people are failing to come to agreement at all, when what’s actually happening is that they’re aligning into various ideological groups.
That is, there’s a big difference between a group of 100 people with independent answers on 10 binary questions (random fair coinflips), and two groups of 50 who disagree on each of the 10 binary questions. I think that if you compared LW newcomers with veterans, you’d find that the newcomers more resemble the first case, and the veterans more the second. This would suggest that people’s answers are becoming more internally coherent, at least.
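A concrete way to tell the two cases apart (the statistic below is my own illustrative choice, not anything from the post): mean pairwise agreement is close to 50% in both cases, but with two opposed blocs the distribution is sharply bimodal, with every pair agreeing on either all ten questions or none of them.

```python
import numpy as np

rng = np.random.default_rng(2)
n_people, n_questions = 100, 10

# Case 1: 100 people answering 10 binary questions by independent fair coinflips.
independent = rng.integers(0, 2, (n_people, n_questions))

# Case 2: two blocs of 50 that disagree on every one of the 10 questions.
bloc = rng.integers(0, 2, n_questions)
blocs = np.vstack([np.tile(bloc, (50, 1)), np.tile(1 - bloc, (50, 1))])

def pairwise_agreement(answers):
    """Fraction of questions on which each pair of respondents agrees."""
    agree = (answers[:, None, :] == answers[None, :, :]).mean(axis=2)
    i, j = np.triu_indices(len(answers), k=1)
    return agree[i, j]

for name, data in [("independent", independent), ("two blocs", blocs)]:
    a = pairwise_agreement(data)
    print(f"{name}: mean agreement {a.mean():.2f}, "
          f"pairs agreeing on all or none of the questions: {np.mean((a == 0) | (a == 1)):.2f}")
```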
In particular, I expect that on this subject the veterans split roughly as follows:
Those who subscribe to Bostrom’s SIA and are Thirders (1⁄3 to 1⁄2 of the LW vets)
Those who subscribe to Bostrom’s SSA and are Halfers (less than 1⁄4)
Those who reject Bostromian anthropic probabilities entirely (less than 1⁄4)
One can easily predict the responses of the first two groups on subsequent questions.
I don’t build a model by looking at the observed results of a phenomenon, and building in a special component to produce each observed result. You wouldn’t learn anything from your models if you did that; they would produce what you built them to produce. I build a model by enumerating the inputs, modeling each input, and seeing how much of the observed results the output matches.
When I run the simulation, people do in fact align into different groups. So far, always 2 groups. But the alignment process doesn’t give either group better overall accuracy. This shows that you don’t need any internal coherence or problem understanding for people to align into groups. Attributing accuracy to people who tend to agree with you, and inaccuracy to those who disagree with you, produces saddle-point dynamics. Once the initial random distribution gets off the saddle point, the groups on the opposite sides each rapidly converge to their own attractor.
What’s especially interesting is that this way of judging people’s accuracy doesn’t just cause different groups to converge to different points; it causes the groups to disagree with each other on every point. There isn’t one “right” group and one “wrong” group; there are two groups that are right about different things. Agreement within a group on some topics indirectly causes its members to take the opposite opinion on any topic on which other groups have strong opinions. In other words: My enemy’s belief P is evidence against P.
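A minimal sketch of the kind of dynamics being described, assuming agents answer binary questions with accuracy p[i], rate each other by how much they currently agree, count disagreement beyond chance as negative evidence, and adopt the trust-weighted majority stance; the names, parameters, and update rule here are my own simplification, not the actual simulation code:

```python
import numpy as np

rng = np.random.default_rng(0)
n_agents, n_questions, n_rounds = 100, 20, 30

truth = rng.integers(0, 2, n_questions)            # hidden correct answers
p = rng.uniform(0.0, 1.0, n_agents)                # per-agent accuracy, drawn from [0, 1]

# Initial private guesses: agent i answers each question correctly with probability p[i].
correct = rng.random((n_agents, n_questions)) < p[:, None]
opinions = np.where(correct, truth, 1 - truth)

for _ in range(n_rounds):
    signed = 2 * opinions - 1                      # answers recoded as -1 / +1
    # Perceived accuracy of others = fraction of questions they currently agree with you on.
    agreement = (opinions[:, None, :] == opinions[None, :, :]).mean(axis=2)
    np.fill_diagonal(agreement, 0.5)               # self gets zero weight below
    trust = agreement - 0.5                        # disagreement beyond chance counts against you
    # Everyone adopts the trust-weighted majority stance on every question.
    opinions = ((trust @ signed) > 0).astype(int)

# Group agents by their final answer vectors and report each bloc's accuracy.
blocs, sizes = np.unique(opinions, axis=0, return_counts=True)
print("number of blocs:", len(blocs), "sizes:", sizes)
print("bloc accuracies:", [round(float((b == truth).mean()), 2) for b in blocs])
```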
In particular, I expect that on this subject the veterans split roughly as follows:
(Sleeping Beauty isn’t the subject of this post.)
OK, I see what you’re doing now. It’s an interesting model, though one feature jumps out at me:
In other words: My enemy’s belief P is evidence against P.
Although this phenomenon is a well-known fallacy among human beings, it doesn’t seem like it should be the rational behavior— and then I noticed that the probabilities p_i can be less than 1⁄2 in your model, and that some of your agents are in fact reliably anti-correct. This seems like a probable cause of a binary group split, if I’m understanding correctly.
What’s the result if you make the probabilities (and accordingly, people’s estimates of the probabilities) range from 1⁄2 to 1 instead of from 0 to 1?
What’s the result if you make the probabilities (and accordingly, people’s estimates of the probabilities) range from 1⁄2 to 1 instead of from 0 to 1?
Then everybody converges on the correct answer to every question. And you just answered the question of why Bayesians should agree to agree: because Bayesians can’t perform worse than random on average, their accuracies range from 1⁄2 to 1, and they are not biased on any problem (unless the evidence is biased, in which case you’re screwed anyway). Averaging their opinions together will thus get the right answer to every (answerable) question. Congratulations! You win 1 Internet!
(The reason for choosing 0 to 1 is explained in the post.)
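A quick toy check of that claim, with the same caveat that this is an illustration rather than the actual simulation: if every accuracy is drawn from [1⁄2, 1], a plain majority vote of independent guessers is almost always right, so there is a single correct consensus for agreement-weighted updating to converge to.

```python
import numpy as np

rng = np.random.default_rng(1)
n_agents, n_questions = 100, 1000

truth = rng.integers(0, 2, n_questions)
p = rng.uniform(0.5, 1.0, n_agents)                      # nobody is worse than chance
correct = rng.random((n_agents, n_questions)) < p[:, None]
opinions = np.where(correct, truth, 1 - truth)

# Unweighted majority vote on each question; with 100 better-than-chance, unbiased
# guessers this recovers the correct answer on essentially every question.
majority = (opinions.mean(axis=0) > 0.5).astype(int)
print("majority-vote accuracy:", (majority == truth).mean())
```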
Although this phenomenon is a well-known fallacy among human beings, it doesn’t seem like it should be the rational behavior
The behavior in my model is rational if the results indicate that it gets the right answer. So far, it looks like it doesn’t.
some of your agents are in fact reliably anti-correct. This seems like a probable cause of a binary group split, if I’m understanding correctly.
You could probably get the same result by having some problems, rather than some agents, usually be answered wrong. An abundance of wrong answers makes the agents split. The agents don’t split into the correct agents and the incorrect agents, at least not for the conditions I’ve tested. There doubtless are settings that would get them to do that.
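To sketch what that variant might look like, using the same illustrative setup as before (again, not the actual code): replace the per-agent accuracy with a per-question correct-rate, so that questions with a low rate are usually answered wrong by everyone, and then feed the resulting opinions into the same trust-weighted update loop.

```python
import numpy as np

rng = np.random.default_rng(3)
n_agents, n_questions = 100, 20

truth = rng.integers(0, 2, n_questions)

# Per-problem rather than per-agent accuracy: everyone answers question j correctly
# with probability q[j], so questions with q[j] < 0.5 are usually answered wrong.
q = rng.uniform(0.0, 1.0, n_questions)
correct = rng.random((n_agents, n_questions)) < q[None, :]
opinions = np.where(correct, truth, 1 - truth)
# These initial opinions can then be run through the same update loop as in the earlier sketch.
```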
Does the 2-group split stay even if you continue the simulation until all answers have been revealed?
If you increase the standard deviation of p[i] so there are more very right and very wrong guessers, do they tend to split more into right and wrong groups? I expect they would.
Does the 2-group split stay even if you continue the simulation until all answers have been revealed?
Good question—no; revelation of answers eventually causes convergence into 1 group.
If you increase the standard deviation of p[i] so there are more very right and very wrong guessers, do they tend to split more into right and wrong groups? I expect they would.
It makes the splitting happen faster.