take into account that the opinions of other group members may have been caused by swaying rather than independent analysis, but that was already true in the individual accuracy case.
Right, I see. For a group of perfect rationalists, yes, I agree, at least to an extent.
The problem is that this is very hard to do in reality. If I have 15 commenters down/upvote a post or comment I make on LW, how do I know to what extent they’re providing 15 distinct opinions vs. 1 opinion followed by 14 swingers? How do I estimate the swinginess coefficient? It seems that group rationality is maximized if individuals state their own opinions on a particular question independently of the group, and only update once a really overwhelming consensus is reached, some time after that particular discussion is over. The group’s decision is then the average of n independent opinions. This would make for a very clever group iff each individual is quite clever.
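Here’s a toy simulation of the difference (the numbers are made up for illustration; this isn’t a model of actual LW voting). Fifteen voters each get a noisy signal of a true value; in one condition everyone reports their own signal, in the other one genuine opinion is echoed by 14 swingers:

```python
# Toy simulation (illustrative numbers only, not a model of LW voting):
# n voters each receive a noisy signal about a true value. In the
# "independent" condition everyone reports their own signal; in the
# "swayed" condition 1 genuine opinion is copied by 14 swingers.
import random

random.seed(0)
TRUE_VALUE = 0.0
N_VOTERS = 15
NOISE = 1.0
TRIALS = 10_000

def trial(swayed: bool) -> float:
    signals = [random.gauss(TRUE_VALUE, NOISE) for _ in range(N_VOTERS)]
    if swayed:
        # 1 genuine opinion followed by 14 swingers who copy it.
        reports = [signals[0]] * N_VOTERS
    else:
        reports = signals
    group_estimate = sum(reports) / N_VOTERS
    return (group_estimate - TRUE_VALUE) ** 2

for swayed in (False, True):
    mse = sum(trial(swayed) for _ in range(TRIALS)) / TRIALS
    label = "swayed" if swayed else "independent"
    print(f"{label:11s} mean squared error: {mse:.3f}")
# Independent averaging shrinks the error by roughly a factor of n;
# perfect swaying leaves the group no better than a single voter.
```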
I should emphasize: this will mean that the group (overall) displays smart behavior, but that the individuals do worse than they otherwise would.
Also, how relevant is Robin’s paper on Aumann’s agreement theorem for wannabe/imperfect Bayesians to this debate? It seems that he might (under certain assumptions) have proved the opposite of what I’m claiming here.
Roko, when you run into a case of “group win / individual loss” on epistemic rationality you should consider that a Can’t Happen, like violating conservation of momentum or something.
In this case, you need to communicate one kind of information (likelihood ratios) and update on the product of those likelihood ratios, rather than trying to communicate the final belief. But the Can’t Happen is a general rule.
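A minimal sketch of that protocol, with made-up likelihood ratios (and assuming each member’s evidence is independent): every member reports the likelihood ratio P(evidence | H) / P(evidence | not-H) from their private evidence, and the group multiplies those into the shared prior odds, instead of averaging everyone’s final beliefs:

```python
# Minimal sketch of the likelihood-ratio protocol described above
# (illustrative numbers of my own). Contrast with naively averaging
# final posteriors, which throws away most of the combined evidence.
from math import prod

prior_odds = 1.0                            # P(H) = 0.5 before anyone speaks
likelihood_ratios = [3.0, 2.0, 0.5, 4.0]    # one per member, assumed independent

# Update on the product of the likelihood ratios:
posterior_odds = prior_odds * prod(likelihood_ratios)
posterior_prob = posterior_odds / (1 + posterior_odds)
print(f"pooled posterior P(H) = {posterior_prob:.3f}")   # 0.923

# Versus averaging each member's own final belief:
individual_posteriors = [(prior_odds * lr) / (1 + prior_odds * lr)
                         for lr in likelihood_ratios]
avg = sum(individual_posteriors) / len(individual_posteriors)
print(f"averaged posteriors   = {avg:.3f}")              # about 0.64
```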
Really!? No exceptions?
This doesn’t feel right. If it is right, it sounds important. Please could you elaborate?
I’d like to see a proof if it’s that fundamental. Is it theorem x.xx in one of the Aumann agreement papers?