Relatedly, in the scenario (in some utterly absurd counterfactual world entirely unlike the real world) where agents sometimes misrepresent the evidence in a direction that favours their actual beliefs, it seems like the policy described here might well do better than the policy of updating fully on all evidence you’re presented with.
Given the limitations of our ability to investigate others’ honesty, it’s possible that the only options really are factionalism or naivety, and that the latter produces worse results than factionalism. For example, if we happen to start with more people favouring (A,A,A) than (B,B,B), then rejecting “distrust those who disagree” may end with everyone in the (A,A,A) corner, which is probably worse than having factions in all eight corners if the reality is (B,B,B).
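For what it’s worth, here is a toy sketch of that failure mode. It is entirely my own construction, not the model from the post: a single binary question whose true answer is B, agents who each round see private evidence that mildly favours the truth but sometimes misreport it as favouring whatever they currently believe, and naive agents who update on every report as if it were honest. Starting with a majority wrongly favouring A, honest reporting lets the naive policy find the truth, while biased reporting drags everyone into the wrong corner.

```python
import random

# Toy model (my own assumptions, not the simulation from the post):
# one binary question, true answer "B".  Each round every agent sees
# private evidence with P(evidence-for-B | B) = 0.6, but with probability
# lie_rate misreports evidence that contradicts its current belief.
# Naive agents update on every report as if it were honest.

EVIDENCE_STRENGTH = 0.6                                   # P(report favours B | B), if honest
LIKELIHOOD_RATIO = EVIDENCE_STRENGTH / (1 - EVIDENCE_STRENGTH)
N_ROUNDS = 40

def run(lie_rate, seed=0):
    rng = random.Random(seed)
    # 75 agents at credence 0.3 in B, 25 at credence 0.7:
    # a majority starts out wrongly favouring A
    credences = [0.3] * 75 + [0.7] * 25

    for _ in range(N_ROUNDS):
        reports = []
        for p in credences:
            saw_b = rng.random() < EVIDENCE_STRENGTH      # truth is B
            believes_b = p > 0.5
            if saw_b != believes_b and rng.random() < lie_rate:
                reports.append(believes_b)                # misreport toward own belief
            else:
                reports.append(saw_b)                     # honest report
        # every agent naively updates on every report at face value
        k = sum(reports)                                  # reports favouring B
        factor = LIKELIHOOD_RATIO ** (2 * k - len(reports))
        credences = [p * factor / (p * factor + (1 - p)) for p in credences]
    return sum(credences) / len(credences)

print("honest reports       -> mean credence in B:", round(run(lie_rate=0.0), 3))
print("biased reports (80%) -> mean credence in B:", round(run(lie_rate=0.8), 3))
```

The mechanism is just the feedback loop: once the naive updaters drift toward A, a larger share of the reports are lies pointing at A, which drags them the rest of the way.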
As Zack says, what we want is a degree of trust that matches agents’ trustworthiness. But that may be extremely hard to obtain, and if all agents are somewhat untrustworthy (but some happen to be right so that their untrustworthiness does little harm) then having trust matching trustworthiness may produce exactly the sort of factional results reported here.
So I think the most interesting question is: Are there strategies that, even when agents may be untrustworthy in their reporting of the evidence, manage to converge to the truth over a reasonable portion of the space of untrustworthiness and actual-evidence-strength? My guesses: (1) yes, there kinda are, and the price they pay instead of factionalism is slowness; (2) if there is enough untrustworthiness relative to the actual strength of evidence then no strategy will give good results; (3) there are plenty of questions in the real world for which #2 holds. But I’m not terribly confident about any of those guesses.
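As a crude illustration of guess (1), here is the same made-up toy model as above (so, again, not the post’s simulation) with a “cautious” policy: update at full strength on your own observation, but treat one randomly chosen peer report per round as only very weak evidence (a likelihood ratio of 0.52/0.48 instead of the true 0.6/0.4).

```python
import random

# Same toy setup as the sketch above (my own assumptions): the truth is B,
# private evidence has P(evidence-for-B | B) = 0.6, and agents misreport
# evidence that contradicts their current belief 80% of the time.
# A "cautious" agent trusts its own observation fully but assigns a heavily
# discounted likelihood ratio to a single randomly chosen peer report.

OWN_LR = 0.6 / 0.4        # honest strength of one's own observation
PEER_LR = 0.52 / 0.48     # heavily discounted strength assigned to a peer report
LIE_RATE = 0.8
CHECKPOINTS = (40, 150, 600)

def bayes(p, lr_if_b, favours_b):
    """Bayesian update of credence p in B, given evidence of strength lr_if_b."""
    lr = lr_if_b if favours_b else 1 / lr_if_b
    return p * lr / (p * lr + (1 - p))

rng = random.Random(0)
credences = [0.3] * 75 + [0.7] * 25          # the majority starts out wrong

for round_no in range(1, max(CHECKPOINTS) + 1):
    observations, reports = [], []
    for p in credences:
        saw_b = rng.random() < 0.6            # truth is B
        observations.append(saw_b)
        believes_b = p > 0.5
        if saw_b != believes_b and rng.random() < LIE_RATE:
            reports.append(believes_b)        # misreport toward own belief
        else:
            reports.append(saw_b)
    new_credences = []
    for i, p in enumerate(credences):
        p = bayes(p, OWN_LR, observations[i])     # trust yourself fully
        j = rng.randrange(len(credences) - 1)     # pick one peer other than yourself
        if j >= i:
            j += 1
        p = bayes(p, PEER_LR, reports[j])         # barely trust the peer
        new_credences.append(p)
    credences = new_credences
    if round_no in CHECKPOINTS:
        print(f"round {round_no:3d}: mean credence in B = "
              f"{sum(credences) / len(credences):.3f}")
```

If the arithmetic behind this is right, the cautious policy should still crawl to the truth under the same amount of lying that sank the naive one, just after far more rounds, which is the “slowness” price in guess (1). And if the private evidence is weak enough, or the lying heavy enough, the discount has to be so severe that convergence effectively never happens, which is guess (2).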