Can you list a bunch of examples of groups as well as examples of irrational beliefs each group has that you have in mind, so we can discuss them more concretely?
Religion and stuff.
I’m not sure religion is a strong counterexample? Religion has declined a lot since the discovery of evolution, and from what I understand, it used to earn its trust by serving as the center of life wisdom and morality. Even today, people typically become religious because people they rationally trust a lot (their parents) attribute great things to the religion.
Certainly pathologies arise, and religion is guilty of plenty of them, where the trust leads to false beliefs, systemic vulnerabilities, etc. This is related to the theory of collective rationality that I briefly allude to when saying:
(I think this has massive implications for collective epistemics, and I’ve gradually been developing a theory of collective rationality based on this, but it’s not finished yet and the purpose of this post is merely to grok the agreement theorem rather than to lay out that theory.)
I plan on writing more on this later, some of which might dissect the pathologies of religion.
90% of the world is religious. And look at polarised politics as well.
You have no evidence that it is rational. Kids trust their parents before they reach the age of reason.
You seem to be using the term “rational” in a different way from how I use it, if you restrict it so that e.g. babies don’t count as using rational inference methods.
Well, I think I’m using it in a way that’s appropriate for Aumann’s theorem, involving a level of conscious, reflective awareness of your own thought processes and those of others.
It would have been helpful to tell me what you mean.
I mean “rational” as in “an algorithm which produces map-territory correspondences”. So for instance we have two reasons to believe that “trust your parents” is a rational heuristic for a baby (i.e. that trusting your parents produces map-territory correspondences, such as a belief of “my parents are trustworthy” which corresponds to a territory of having trustworthy parents):
Mechanistically, parents are sampled from humans, who we know are somewhat trustworthy in general, and they are conditioned on having children, which probably positively correlates with, or at least doesn’t negatively correlate with, trustworthiness. Relationally, parents tend to care about their children and so are especially trustworthy to their children.
Logically, trusting your parents tends to produce a lot of beliefs, and evolutionarily, if those beliefs tended to be mistaken (e.g. if your parents would tend to encourage you to do dangerous stuff, rather than warn you about dangerous stuff), then that would be selected against, leading people to not trust their parents. So the fact that they do trust their parents is evidence that trusting one’s parents is (generally) rational.
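To make the selection argument concrete, here’s a toy simulation (all parameters are invented for illustration, not taken from any data): children either trust or ignore a parental warning about a hazard, ignoring it is sometimes fatal, and the trusting trait spreads over generations even though no child ever independently verifies the warning.

```python
import random

random.seed(0)
POP, GENS = 1000, 30
DANGER_DEATH_RATE = 0.3  # invented: chance an untrusting child dies to the hazard

# Each individual either trusts parental warnings (True) or not (False).
trusting = [random.random() < 0.5 for _ in range(POP)]  # initial 50/50 mix

for _ in range(GENS):
    # Trusting children heed the warning and always survive the hazard;
    # untrusting children face it unwarned.
    survivors = [t for t in trusting if t or random.random() > DANGER_DEATH_RATE]
    # Survivors reproduce back up to POP; children inherit the trait.
    trusting = [random.choice(survivors) for _ in range(POP)]

print("share trusting after", GENS, "generations:", sum(trusting) / POP)
# With these made-up numbers the trusting share climbs toward 1.0.
```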
Most such algorithms aren’t the Aumann mechanism.
Infants have no choice but to trust their parents: they can’t swap them for other parents and they can’t go it alone. So they would trust their parents even if their parents were irrational. So their trust doesn’t have to be the outcome of any rational mechanism, least of all Aumann agreement.
Evolution targets usefulness rather than correspondence. The two can coincide, but don’t have to. Worse still, beliefs that are neutral in usefulness, neither useful nor harmful, will be passed down the generations like junk DNA, because the mechanisms of familial and tribal trust just aren’t that rational, and don’t involve checking for correspondence truth. If you look at the wider non-WEIRD world, there is abundant evidence of such arbitrary beliefs. Checking for correspondence truth had to be invented separately: it’s called science.
If an algorithm does not act in accordance with Aumann’s agreement theorem, then it can be made more effective in producing truth by adding Aumann mechanisms to it.
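For what it’s worth, the agreement dynamics themselves are easy to simulate. Below is a minimal sketch, with an invented toy setup, of the Geanakoplos-Polemarchakis process (“we can’t disagree forever”) that underlies the theorem: two agents with a common prior take turns announcing their posteriors and updating on what each announcement reveals, and the announcements converge to agreement.

```python
from math import comb

THETAS = (0.3, 0.7)  # the coin's bias is one of these; common 50/50 prior
N = 5                # private coin flips observed by each agent

def likelihood(k, theta):
    """P(k heads in N flips | theta)."""
    return comb(N, k) * theta**k * (1 - theta) ** (N - k)

def posterior(own_k, other_possible):
    """P(theta = 0.7 | own count, other's count lies in other_possible)."""
    w = {t: likelihood(own_k, t) * sum(likelihood(k, t) for k in other_possible)
         for t in THETAS}
    return w[0.7] / (w[0.3] + w[0.7])

k_alice, k_bob = 4, 1  # arbitrary illustrative private observations

# Common knowledge: the set of counts each agent could have (initially all).
poss_alice, poss_bob = set(range(N + 1)), set(range(N + 1))

for rnd in range(1, 10):
    p_alice = posterior(k_alice, poss_bob)
    # Alice's announcement reveals which counts are consistent with it.
    poss_alice = {k for k in poss_alice
                  if abs(posterior(k, poss_bob) - p_alice) < 1e-12}
    p_bob = posterior(k_bob, poss_alice)
    poss_bob = {k for k in poss_bob
                if abs(posterior(k, poss_alice) - p_bob) < 1e-12}
    print(f"round {rnd}: Alice {p_alice:.4f}, Bob {p_bob:.4f}")
    if abs(p_alice - p_bob) < 1e-12:
        break  # posteriors are common knowledge and equal: agreement
```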
The trust is an outcome of evolution, which is a rational process.
Evolution is quantitatively a way more rational mechanism than e.g. assigning base pairs randomly. It’s not the same as correspondence, but rationality is a matter of degree.
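As a back-of-envelope illustration of the quantitative gap (in the spirit of Dawkins’s “weasel” demonstration, with an arbitrary made-up target sequence): assigning a 20-base sequence at random would take on the order of 4^20, about 10^12, fresh draws to hit a target, while mutation plus selection gets there in a few hundred steps.

```python
import random

random.seed(1)
ALPHABET = "ACGT"
TARGET = "ACGTACGTACGTACGTACGT"  # arbitrary 20-base stand-in for an adaptive sequence

def score(seq):
    return sum(a == b for a, b in zip(seq, TARGET))

# Random assignment would need ~4**20 (about 1.1e12) fresh draws on average
# to hit TARGET, which is hopeless to run. Cumulative selection instead:
seq = "".join(random.choice(ALPHABET) for _ in TARGET)
steps = 0
while seq != TARGET:
    i = random.randrange(len(seq))
    mutant = seq[:i] + random.choice(ALPHABET) + seq[i + 1:]
    if score(mutant) >= score(seq):  # keep the mutation if it's no worse
        seq = mutant
    steps += 1
print("selection reached the target in", steps, "mutations")  # a few hundred
```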
That’s different to the claim that there’s a lot of Aumann about already.
And kind!
I should say, from the way you talk about these things, it sounds like you have a purpose or application in mind for the concepts in question where my definitions don’t work. However, I don’t know what this application is, and so I can’t grant you that it holds or give my analysis of it.
If instead of poking holes in my analysis, you described your application and showed how your impression of my analysis gives the wrong results in that application, then I think the conversation could proceed more effectively.
There is a lot of Aumann already because non-Aumannian algorithms for obtaining information can be improved by making them Aumannian.
There isn’t a lot of Aumann around already, because it requires knowledge of priors, not some vaguely defined trust.
https://www.lesswrong.com/posts/ybKP6e5K7e2o7dSgP/don-t-get-distracted-by-the-boilerplate
...doesn’t prove that the boilerplate never matters. It just cherry-picks a few cases where it doesn’t.
Yes, but I still don’t see what practical real-world errors people could make by seeing the things mentioned in my post as examples of Aumann’s agreement theorem.
Maybe believing in God and the Copenhagen Interpretation doesn’t lead to real-world errors, either. But rationalism isn’t pragmatism.
Believing in God leads to tons of real-world errors I think.
One foundation for rationalism is pragmatism, due to e.g. Dutch book arguments, value of information, etc..
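For concreteness, here is the classic Dutch book arithmetic as a few lines of code (the credences and stakes are made up): an agent whose probabilities for an event and its negation sum to more than 1 will buy both bets at prices they consider fair, and lose money in every possible world.

```python
# Incoherent credences (invented numbers): P(rain) + P(no rain) = 1.2 > 1.
cred_rain, cred_no_rain = 0.6, 0.6

# At credence p, a bet "pay $1 if X happens" looks fair at price $p,
# so the agent willingly buys both bets from the bookie.
cost = cred_rain + cred_no_rain  # agent pays $1.20 in total

# Exactly one of the two bets pays out $1 in each possible world.
for world in ("rain", "no rain"):
    payout = 1.0
    print(f"{world}: net = {payout - cost:+.2f}")  # -0.20 either way
```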
More generally, when deciding what to abstract as basically similar vs fundamentally different, it is usually critical to ask “for what purpose?”, since one can of course draw hopelessly blurry or annoyingly fine-grained distinctions if one doesn’t have a goal to constrain one’s categorization method.
But that was never fully resolved. And if you are going to adopt “epistemic rationality doesn’t matter” as a premise, you need to make it explicit.
I don’t think epistemic rationality doesn’t matter, but obviously, since human brains are much smaller than the universe, our minds cannot draw every distinction that exists, and therefore we need some abstraction. This abstraction is best done in the context of some purpose one cares about, as then one can backchain into which distinctions do and do not matter.