This assumes that the error terms don’t correlate significantly, and that this is a case where Aumann’s Agreement applies.
Considering that one of these error terms is the estimation of someone’s rationality based on little more than a few publicly-stated beliefs, this is perhaps a dangerous assumption to make.
> This assumes that the error terms don’t correlate significantly, and that this is a case where Aumann’s Agreement applies.
Which error terms are you referring to, and how would you do better?
> Considering that one of these error terms is the estimation of someone’s rationality based on little more than a few publicly-stated beliefs, this is perhaps a dangerous assumption to make.
Dangerous? It’s just that in those cases you have to have wide error bars. You can’t expect information to hurt.
As for the error terms: the reason majority methods are often reliable is that they exploit the typical feature that the correct answer will correlate with itself (which is why we need Aumann’s Agreement to apply) and that the errors will not correlate significantly with each other (which could be false if there is a strong attractor in the solution space—like a narrow pass past a cliff).
If these conditions apply, then your majority will be correct with a high degree of confidence. If not, then your confidence is much lower. The problem is that it is not clear how to determine whether these conditions apply without enough analysis of the problem as to make the majority method largely unnecessary.
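As a rough illustrative sketch (my own, not from the discussion above), the effect of correlated errors on majority reliability can be simulated directly. Here a "strong attractor" is modelled crudely as some probability that the whole group falls into the same answer together rather than erring independently; the function name and parameters are hypothetical.

```python
import random

random.seed(0)

def majority_accuracy(n_voters, p_correct, shared_error_prob, trials=5000):
    """Fraction of trials in which a simple majority vote is correct.

    Each voter is independently correct with probability p_correct,
    except that with probability shared_error_prob the whole group
    falls into the same attractor and casts identical votes,
    modelling correlated errors.
    """
    correct = 0
    for _ in range(trials):
        if random.random() < shared_error_prob:
            # Correlated case: everyone gives the same answer,
            # which is correct only with probability p_correct.
            votes = [random.random() < p_correct] * n_voters
        else:
            # Independent case: each voter errs on their own.
            votes = [random.random() < p_correct for _ in range(n_voters)]
        if sum(votes) > n_voters / 2:
            correct += 1
    return correct / trials

# With independent errors, 25 voters at 60% accuracy beat any single voter;
# with heavily correlated errors, the majority's advantage largely evaporates.
independent = majority_accuracy(25, 0.6, shared_error_prob=0.0)
correlated = majority_accuracy(25, 0.6, shared_error_prob=0.5)
print(independent, correlated)
```

The gap between the two numbers is the whole value of the majority method, and it depends entirely on the error-correlation condition that is hard to check in advance.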
Perhaps someone has a quick way, but things like an in-depth understanding of solution spaces and careful Aumann Agreement analysis seem like costly prerequisites for using majority methods. Personally, my approach would be to treat majority methods as potentially useful but unreliable for these reasons, and to base my weighting of the majority on prior evidence of correct or useful estimates rather than on estimations of rationality.
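To make the weighting idea concrete, here is a minimal sketch (my own construction, with hypothetical numbers): weight each voter by the log-odds of their observed track record, so a coin-flip record carries zero weight and a strong record can outvote a weakly-informed crowd.

```python
import math

def weighted_vote(votes, track_records):
    """Weighted majority on a yes/no question.

    votes         -- list of booleans, one per voter
    track_records -- list of past accuracies in (0, 1), same order

    Each voter's weight is the log-odds of their past accuracy,
    so a 50% (coin-flip) record contributes nothing.
    """
    score = 0.0
    for vote, p in zip(votes, track_records):
        weight = math.log(p / (1.0 - p))
        score += weight if vote else -weight
    return score > 0

# Two weakly-informed voters say yes; one voter with a strong
# track record says no. The raw majority says yes, but the
# track-record weighting sides with the dissenter.
print(weighted_vote([True, True, False], [0.55, 0.50, 0.95]))
```

The point is not this particular weighting rule but that the weights come from evidence of past performance, which is checkable, rather than from an estimate of rationality, which is not.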
Of course the most evident danger comes from treating the methods as more confident than they are. But another danger is that estimating rationality as the basis of your method can easily degrade to taking the majority of favorable positions. Cognitive short-circuiting like this is very, very easy, and in this case the method is especially vulnerable to this sort of short-circuiting unless an extremely solid method of rationality estimation is packaged with it.
Shorter and simpler: if people base their beliefs on other people’s beliefs, without independently examining the evidence and reaching their own conclusions, you can easily generate massive consensus based on nothing at all.