A problem, in three parts:
I am aware that the reason I believe a good number of the things I believe is that I’m surrounded by very smart people who are full of good arguments for those things, rather than because I have done a lot of my own independent research. If I had different very smart friends who had a different set of good arguments, my beliefs would almost certainly look very different—not because I consciously adopt the strategy “believe what my friends believe,” but because my friends are smart and good at expressing themselves, and if they all believe something, I have heard a lot of very good and persuasive arguments for it.
This is really not very truth-seeking! I would like to believe things for better reasons than “all of my friends believe it and they’re very smart”!
However, all of the ways that I have to counteract this effect boil down to “discount arguments for being convincing and coming from people you trust,” which is even less truth-seeking.
I have no idea what to do about this.
Together with Bayes’s formula (which in practice mostly means remaining aware of base rates when evidence comes to light), another key point about reasoning under uncertainty is to avoid it whenever possible. As with the long-term irrelevance of news, cognitive and methodological overhead makes uncertain knowledge less useful. There are exceptions (you do want to keep track of news about an uncertain prospect of a war breaking out in your country), but all else equal this is not the kind of thing that’s worth worrying about too much. And certainty is not the same as consensus, or as being well-known to interested people, since there are things that can simply be understood. If you study something seriously, there are many observations, mostly very hypothetical or abstract ones, that can be made with certainty and that almost nobody else has made. Truth-seeking is not about seeking all available truths, or else you might as well memorize white noise all day long.
When I realized that most news did not influence me, i.e., change my behavior or let me update my world model, I stopped reading it.
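To make the point about base rates concrete, here is a standard textbook-style calculation (the numbers are made up purely for illustration): a test with 95% sensitivity and a 5% false-positive rate, applied to a condition with a 1% base rate, still leaves you with only about a 16% posterior.

```python
# Standard base-rate illustration with made-up numbers: a 95%-sensitive test
# with a 5% false-positive rate, applied to a condition with a 1% base rate.
p_condition = 0.01
p_pos_given_condition = 0.95      # sensitivity
p_pos_given_no_condition = 0.05   # false-positive rate

# Bayes: P(condition | positive) = P(positive | condition) * P(condition) / P(positive)
p_pos = (p_pos_given_condition * p_condition
         + p_pos_given_no_condition * (1 - p_condition))
p_condition_given_pos = p_pos_given_condition * p_condition / p_pos
print(f"{p_condition_given_pos:.2f}")  # ~0.16: the base rate dominates
```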
You could try to explicitly model how correlated your friends are. Do they all talk to each other and reach consensus? Then from 10 friends you’re not getting 10 opinions; it could be more like 2 (or even effectively fewer than 1, if they’re not keeping track of who is confident based on what evidence, cf. https://en.wikipedia.org/wiki/Information_cascade ). Do most of them use mostly the same strategies for evaluating something (the same strategies to search for information, to find counterarguments, to notice ways they made unhelpful assumptions, etc.)? Then you’ve got correlated error, and you might be able to help them and yourself by trying to notice when other groups of people do useful work where your friends are failing to (though those other groups may have their own systematic errors).
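To put toy numbers on the “not really 10 opinions” point (these numbers are mine, purely for illustration): if each friend’s judgment were an independent 80%-accurate signal, ten of them agreeing would be overwhelming evidence; if they are all echoing one underlying judgment, you have really only got a single 80%-accurate signal.

```python
# Toy calculation: how much ten agreeing friends should move you depends on
# whether their reports are independent. Assume a uniform prior on H and that
# each friend's independent judgment is correct with probability 0.8, so one
# report carries a likelihood ratio of 4.

accuracy = 0.8
lr_one = accuracy / (1 - accuracy)  # likelihood ratio of one independent report

odds_independent = lr_one ** 10     # ten genuinely independent judgments
odds_cascade = lr_one ** 1          # ten friends all echoing one judgment

def to_prob(odds):
    return odds / (1 + odds)

print(f"ten independent reports:       P(H) = {to_prob(odds_independent):.6f}")
print(f"one judgment echoed ten times: P(H) = {to_prob(odds_cascade):.2f}")
```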
Become unpersuadable by bad arguments. Seek the best arguments both for and against a proposition. And accept that you’ll never be epistemically self-sufficient in all domains.
When I realized this, it helped me empathize much better with people who seem to know less: they make do with what they’ve got!
I realize this doesn’t answer your problem. I’m not sure there is a full answer, but I think some progress can be made by understanding the mechanism you have outlined: see that it works, understand why it works, and notice when it seems to break down.
This is indeed a conundrum! Ultimately, I think it is possible to do better, and that doing better sort of looks like biting the bullet on “discount arguments for being convincing and coming from people you trust”, but that that’s a somewhat misleading paraphrase: more precisely than “discounting” evidence from the people you trust, you want to be “accounting for the possibility of correlated errors” in evidence from the people you trust.
In “Comment on ‘Endogenous Epistemic Factionalization’”, I investigated a toy model by James Owen Weatherall and Cailin O’Connor in which populations of agents that only update on evidence from agents with similar beliefs end up polarizing into factions, most of which are wrong about some things.
In that model, if the agents update on everyone’s reports (rather than only those from agents with already-similar beliefs in proportion to that similarity), then they converge to the truth. This would seem to recommend a moral of: don’t trust your very smart friends just because they’re your friends; instead, trust the aggregate of all the very smart people in the world (in proportion to how very smart they are).
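For anyone who wants to poke at the dynamics, here is a rough simulation sketch in the spirit of that result. It is not the Weatherall–O’Connor model itself (their agents use a distance-weighted update rule; I am substituting the cruder rule “ignore reports from agents whose credence is too far from mine”), and all the parameters are made up, but it shows the same qualitative behavior: with full sharing the population converges on the truth, and with similarity-gated sharing it tends to split.

```python
import random
from math import comb

# Rough illustrative sketch, NOT the actual Weatherall-O'Connor model: agents
# estimate whether arm B (true success rate 0.6) beats a known 0.5 baseline.
# Agents who currently favor B test it and publish their results; everyone
# updates on the published results, except that in the "only_similar"
# condition an agent ignores reports from agents whose credence differs from
# its own by more than TRUST_RADIUS.

N_AGENTS = 20
N_ROUNDS = 100
N_TRIALS = 10        # pulls of arm B per experimenting agent per round
P_B = 0.6            # true success rate of arm B
TRUST_RADIUS = 0.3   # only used when only_similar=True

def likelihood(k, n, p):
    """Binomial probability of k successes in n trials at success rate p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def run(only_similar):
    credences = [random.random() for _ in range(N_AGENTS)]  # P("B is better")
    for _ in range(N_ROUNDS):
        # Agents who currently think B is better run the experiment.
        reports = [(c, sum(random.random() < P_B for _ in range(N_TRIALS)))
                   for c in credences if c > 0.5]
        updated = []
        for c in credences:
            for reporter_credence, successes in reports:
                if only_similar and abs(c - reporter_credence) > TRUST_RADIUS:
                    continue  # discard evidence from dissimilar agents
                # Bayesian update on "B pays 0.6" vs. "B pays only 0.5".
                num = c * likelihood(successes, N_TRIALS, P_B)
                den = num + (1 - c) * likelihood(successes, N_TRIALS, 0.5)
                c = num / den
            updated.append(c)
        credences = updated
    return credences

random.seed(0)
print("update on everyone:    ", sorted(round(c, 2) for c in run(only_similar=False)))
print("update on similar only:", sorted(round(c, 2) for c in run(only_similar=True)))
```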
But this moral doesn’t seem like particularly tractable advice. Sure, it would be better to read more widely from all the very smart authors in the world with different cultures and backgrounds and interests than my friends, but I don’t have the spare time for that. In practice, I am going to end up paying more attention to my friends’ arguments, because I spend more time talking to my friends than anyone else. So, I’m stuck … right?
Not entirely. The glory of subjective probability is that when you don’t know, you can just say so. To the extent that I think I would have had different beliefs if I had different but equally very smart friends, I should be including that in my model of the relationship between the world and my friends’ beliefs. And to the extent that I don’t know how the argument would shake out if I could exhaustively debate my alternate selves who fell in with different very smart friend groups, I should be generically less confident in my current beliefs, spreading probability-mass onto more possibilities corresponding to the beliefs of alternate selves with alternate very smart friends, whom I don’t have the computational power to sync up with.
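One crude way to cash out that “spreading probability-mass” move numerically (toy numbers of my own, just to show the shape of the operation): treat “which smart friend group I happened to fall in with” as a latent variable and average over it.

```python
# Toy numbers, purely illustrative: with my actual friends I'd land at
# credence 0.9 in some claim, but I give 40% probability to the possibility
# that an alternate self with a different (equally smart) friend group
# would have landed at 0.3 instead.
p_alternate_friend_group = 0.4
credence_given_these_friends = 0.9
credence_given_other_friends = 0.3

# Marginalize over the latent "which friends did I get" variable.
mixed_credence = ((1 - p_alternate_friend_group) * credence_given_these_friends
                  + p_alternate_friend_group * credence_given_other_friends)
print(round(mixed_credence, 2))  # 0.66: generically less confident than 0.9
```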
I think one way of dealing with the uncertainty of whom you can trust is to ‘live in both worlds’, at least probabilistically. This is nicely illustrated in this Dath Ilan fiction: https://www.glowfic.com/board_sections/703
With limited intelligence and limited time, I will never be correct about everything. That sucks, but such is life. I can still keep trying to do better, despite knowing that the results will never be perfect.
I try to listen to people who are not my friends (or to make/keep friends outside my usual bubbles), even if they are obviously wrong about some things… because they still might be right about some other things. I try to listen to it all, then filter it somehow. But time is limited, so I do not do this too often.
A typical outcome is that my opinions are similar to the opinions of my smart friends, but with way less certainty; plus an occasional fringe belief. And yes, even this is not perfect.