Infants have no choice but to trust their parents: they can’t swap them for other parents and they can’t go it alone. So they would trust their parents even if their parents were irrational. So their trust doesn’t have to be the outcome of any rational mechanism, least of all Aumann agreement.
The trust is an outcome of evolution, which is a rational process.
Evolution targets usefulness rather than correspondence. The two can coincide, but don’t have to.
If an algorithm does not act in accordance with Aumann’s agreement theorem, then it can be made more effective in producing truth by adding Aumann mechanisms to it.
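One way to make this claim concrete is a toy sketch (the coin bias, flip counts, and Beta prior below are arbitrary illustration choices, not anything from the post): two Bayesian agents share a common prior over a coin’s bias and each privately observes some flips. Because the prior is common and the model is conjugate, exchanging posteriors is equivalent to pooling the data, so after the exchange both agents hold the same, better-informed estimate.

```python
import random

random.seed(0)

TRUE_BIAS = 0.7   # hypothetical "truth" the agents are trying to estimate
N_FLIPS = 50      # private flips each agent observes

def flip_heads(n):
    """Number of heads in n private flips of the coin."""
    return sum(random.random() < TRUE_BIAS for _ in range(n))

def beta_mean(heads, tails, a=1.0, b=1.0):
    """Posterior mean of the bias under a common Beta(a, b) prior."""
    return (a + heads) / (a + b + heads + tails)

h1 = flip_heads(N_FLIPS); t1 = N_FLIPS - h1   # agent 1's private evidence
h2 = flip_heads(N_FLIPS); t2 = N_FLIPS - h2   # agent 2's private evidence

solo_1 = beta_mean(h1, t1)
solo_2 = beta_mean(h2, t2)

# "Aumann step": with a common prior and a conjugate model, exchanging
# posteriors amounts to pooling the evidence, after which the two agents
# agree exactly and are (on average) closer to the true bias.
shared = beta_mean(h1 + h2, t1 + t2)

print(f"agent 1 alone: {solo_1:.3f}")
print(f"agent 2 alone: {solo_2:.3f}")
print(f"after sharing: {shared:.3f}   (true bias: {TRUE_BIAS})")
```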
That’s different to the claim that there’s a lot of Aumann about already.
And kind!
I should say: from the way you talk about these things, it sounds like you have a purpose or application in mind for the concepts in question, one where my definitions don’t work. However, I don’t know what this application is, and so I can’t grant that it holds or give my analysis of it.
If, instead of poking holes in my analysis, you described your application and showed how my analysis (as you understand it) gives the wrong results in that application, then I think the conversation could proceed more effectively.
There is a lot of Aumann already because non-Aumannian algorithms for obtaining information can be improved by making them Aumannian.
There isn’t a lot of Aumann around already, because it requires knowledge of priors, not some vaguely defined trust.
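A minimal sketch of what the common-prior requirement buys (the counts and prior parameters are arbitrary illustration values): give two agents the same, fully public evidence but different priors, and common knowledge of each other’s posteriors no longer forces agreement.

```python
def beta_mean(heads, tails, a, b):
    """Posterior mean of a coin's bias under a Beta(a, b) prior."""
    return (a + heads) / (a + b + heads + tails)

heads, tails = 7, 3   # evidence that is fully public to both agents

# Same evidence, different priors: both posteriors can be common knowledge,
# yet the agents still disagree, because Aumann's theorem assumes a common prior.
print(beta_mean(heads, tails, a=1, b=1))    # ~0.667
print(beta_mean(heads, tails, a=1, b=10))   # ~0.381
```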
https://www.lesswrong.com/posts/ybKP6e5K7e2o7dSgP/don-t-get-distracted-by-the-boilerplate
...doesn’t prove that the boilerplate never matters. It just cherry-picks a few cases where it doesn’t.
Yes, but I still don’t see what practical real-world errors people could make by seeing the things mentioned in my post as examples of Aumann’s agreement theorem.
Maybe believing in God and the Copenhagen Interpretation doesn’t lead to real-world errors, either. But rationalism isn’t pragmatism.
Believing in God leads to tons of real-world errors, I think.
One foundation for rationalism is pragmatism, due to e.g. Dutch book arguments, value of information, etc.
More generally, when deciding what to abstract as basically similar vs fundamentally different, it is usually critical to ask “for what purpose?”, since one can of course draw hopelessly blurry or annoyingly fine-grained distinctions if one doesn’t have a goal to constrain one’s categorization method.
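The Dutch book argument mentioned above can be illustrated with a standard toy case (the numbers are arbitrary; the only assumption is an agent willing to price $1 bets at its stated probabilities): if its probabilities for an event and its complement sum to more than 1, a bookie can sell it a pair of bets that together lose money no matter the outcome.

```python
# Incoherent credences: P(A) and P(not A) sum to 1.2 instead of 1.
p_A, p_not_A = 0.6, 0.6

# The bookie sells a $1 bet on A for p_A dollars and a $1 bet on not-A
# for p_not_A dollars. Exactly one of the two bets pays out $1.
agent_pays = p_A + p_not_A    # 1.20 paid up front
agent_receives = 1.0          # whichever outcome occurs

print(f"guaranteed loss: ${agent_pays - agent_receives:.2f}")   # $0.20
```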
But that was never fully resolved. And if you are going to adopt “epistemic rationality doesn’t matter” as a premise, you need to make it explicit.
I don’t think epistemic rationality doesn’t matter, but obviously, since human brains are much smaller than the universe, our minds cannot draw every distinction that exists, and therefore we need some abstraction. This abstraction is best done in the context of some purpose one cares about, since one can then backchain to which distinctions do and do not matter.