This is indeed a conundrum! Ultimately, I think it is possible to do better, and that doing better sort of looks like biting the bullet on “discount arguments for being convincing and coming from people you trust”, though that’s a somewhat misleading paraphrase: rather than “discounting” evidence from the people you trust, you want to be “accounting for the possibility of correlated errors” in that evidence.
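To make “correlated errors” concrete, here’s a back-of-the-envelope Bayes calculation with invented numbers: two friends vouching for a claim with independent errors are much stronger evidence than two friends who are really just echoing the same upstream source.

```python
# Toy numbers, assumed for illustration: each friend's report on claim X
# is accurate 80% of the time, so one report carries a 4:1 likelihood ratio.
prior_odds = 1.0           # start at 1:1 odds on X
lr_one_report = 0.8 / 0.2

# If the friends' errors are independent, the likelihood ratios multiply.
odds_independent = prior_odds * lr_one_report ** 2   # 16:1

# If their errors are perfectly correlated (they both read the same blog
# post), the second report is redundant and adds no further evidence.
odds_correlated = prior_odds * lr_one_report         # 4:1

for label, odds in [("independent", odds_independent),
                    ("correlated", odds_correlated)]:
    print(label, round(odds / (1 + odds), 3))
# independent 0.941
# correlated 0.8
```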
In “Comment on ‘Endogenous Epistemic Factionalization’”, I investigated a toy model by James Owen Weatherall and Cailin O’Connor in which populations of agents that only update on evidence from agents with similar beliefs end up polarizing into factions, most of which are wrong about some things.
In that model, if the agents update on everyone’s reports (rather than only those from agents with already-similar beliefs in proportion to that similarity), then they converge to the truth. This would seem to recommend a moral of: don’t trust your very smart friends just because they’re your friends; instead, trust the aggregate of all the very smart people in the world (in proportion to how very smart they are).
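For readers who want to poke at these dynamics directly, here is a minimal simulation sketch in the spirit of the model, not the authors’ exact specification: agents hold a credence that a coin is biased toward heads, agents who believe it run flips, and everyone updates on each report with a trust weight that falls off linearly with belief distance (Jeffrey conditionalization). The parameter values and the `MISTRUST` falloff are my own illustrative choices.

```python
import random

N_AGENTS, N_ROUNDS, N_FLIPS = 20, 200, 10
P_GOOD, P_BAD = 0.6, 0.4   # H: the coin lands heads with 0.6, else 0.4
MISTRUST = 2.0             # how steeply trust decays with belief distance


def bayes_update(credence, heads, flips):
    """Full-strength Bayesian update of P(H) on one binomial report."""
    like_h = P_GOOD ** heads * (1 - P_GOOD) ** (flips - heads)
    like_not = P_BAD ** heads * (1 - P_BAD) ** (flips - heads)
    numer = credence * like_h
    return numer / (numer + (1 - credence) * like_not)


def simulate(mistrust):
    credences = [random.random() for _ in range(N_AGENTS)]
    for _ in range(N_ROUNDS):
        # Only agents who currently believe H run the experiment;
        # the coin really is biased toward heads (H is true).
        reports = [
            (c, sum(random.random() < P_GOOD for _ in range(N_FLIPS)))
            for c in credences if c > 0.5
        ]
        updated = []
        for c_i in credences:
            c = c_i
            for c_j, heads in reports:
                # Jeffrey update: weight each report by a trust factor
                # that falls off linearly with belief distance.
                w = max(0.0, 1.0 - mistrust * abs(c_i - c_j))
                c = w * bayes_update(c, heads, N_FLIPS) + (1 - w) * c
            updated.append(c)
        credences = updated
    return credences


random.seed(0)
print(sorted(round(c, 2) for c in simulate(MISTRUST)))  # often factionalizes
print(sorted(round(c, 2) for c in simulate(0.0)))       # converges near 1.0
```

With `mistrust` high enough, an agent whose credence sits far from every experimenter’s gives all their reports zero weight, so its belief never moves; that’s how the dissenting faction locks in, while `mistrust = 0` recovers the “update on everyone” regime that converges.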
But this moral doesn’t seem like particularly tractable advice. Sure, it would be better to read widely from all the very smart authors in the world, whose cultures and backgrounds and interests differ from my friends’, than to listen only to my friends, but I don’t have the spare time for that. In practice, I am going to end up paying more attention to my friends’ arguments, because I spend more time talking to my friends than to anyone else. So, I’m stuck … right?
Not entirely. The glory of subjective probability is that when you don’t know, you can just say so. To the extent that I think I would have had different beliefs if I had different but equally very smart friends, I should be including that in my model of the relationship between the world and my friends’ beliefs. The extent to which I don’t know how the argument would shake out if I could exhaustively debate my alternate selves who fell in with different very smart friend groups is a force that should make me generically less confident in my current beliefs, spreading probability-mass onto more possibilities corresponding to the beliefs of alternate selves with alternate very smart friends, whom I don’t have the computational power to sync up with.
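One way to cash that out numerically (all the numbers here are invented for the sake of the example): treat “which very smart friend group I fell in with” as a nuisance variable and mix over it.

```python
# Hypothetical credences in some claim X, conditional on which equally
# very smart friend group I counterfactually ended up debating with.
credence_given_group = {"my actual friends": 0.9, "group B": 0.5, "group C": 0.3}

# My uncertainty over which group's frame I would have inherited.
prob_group = {"my actual friends": 0.4, "group B": 0.3, "group C": 0.3}

credence_in_x = sum(prob_group[g] * credence_given_group[g]
                    for g in credence_given_group)
print(round(credence_in_x, 2))  # 0.6, notably less confident than my faction's 0.9
```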
I think one way of dealing with the uncertainty of whom you can trust is to ‘live in both worlds’, at least probabilistically. This is nicely illustrated in this Dath Ilan fiction: https://www.glowfic.com/board_sections/703