There is a lot of Aumann already because non-Aumannian algorithms for obtaining information can be improved by making them Aumannian.
There isn’t a lot of Aumann around already, because the theorem requires knowledge of priors, not some vaguely defined trust.
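For concreteness, the machinery the theorem actually needs is easy to exhibit in a toy simulation. Here is a minimal sketch (the states, partitions, and event are all made up for illustration) of the Geanakoplos–Polemarchakis back-and-forth behind the theorem: two agents share a common prior, each privately learns only which cell of their own partition the true state is in, and they alternately announce posteriors for an event and condition on each other’s announcements until the announcements match:

```python
from fractions import Fraction

# Toy model (all numbers made up): 9 equally likely world-states,
# a shared ("common") prior, and an event E both agents estimate.
prior = {s: Fraction(1, 9) for s in range(9)}
event = {1, 5}

# Private information: each agent only learns which cell of their
# own partition the true state falls in.
alice = [{0, 1, 2}, {3, 4, 5}, {6, 7, 8}]
bob = [{0, 1, 2, 3}, {4, 5, 6, 7}, {8}]

def cell_of(partition, state):
    return next(c for c in partition if state in c)

def posterior(possible):
    """P(event | possible states), computed from the common prior."""
    total = sum(prior[s] for s in possible)
    return sum(prior[s] for s in possible if s in event) / total

def refine(listener, speaker):
    """Condition on an announcement: split each of the listener's cells
    by what the speaker would have announced in each state."""
    refined = []
    for c in listener:
        groups = {}
        for s in c:
            groups.setdefault(posterior(cell_of(speaker, s)), set()).add(s)
        refined.extend(groups.values())
    return refined

true_state = 4
while True:
    pa = posterior(cell_of(alice, true_state))  # Alice announces
    bob = refine(bob, alice)                    # Bob conditions on it
    pb = posterior(cell_of(bob, true_state))    # Bob announces
    alice = refine(alice, bob)                  # Alice conditions on it
    print(f"Alice: {pa}, Bob: {pb}")
    if pa == pb:                                # guaranteed in finitely many rounds
        break
```

This prints `Alice: 1/3, Bob: 1/2` and then `Alice: 1/2, Bob: 1/2`: they converge, but only because `posterior` and `refine` both read off the single shared `prior` (and each agent knows the other’s partition). Drop those assumptions and the agreement guarantee no longer goes through, which is the point above.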
https://www.lesswrong.com/posts/ybKP6e5K7e2o7dSgP/don-t-get-distracted-by-the-boilerplate
...doesn’t prove that the boilerplate never matters. It just cherry-picks a few cases where it doesn’t.
Yes, but I still don’t see what practical real-world errors people could make by seeing the things mentioned in my post as examples of Aumann’s agreement theorem.
Maybe believing in God and the Copenhagen Interpretation doesn’t lead to real-world errors, either. But rationalism isn’t pragmatism.
Believing in God leads to tons of real-world errors, I think.
One foundation for rationalism is pragmatism, due to e.g. Dutch book arguments, the value of information, etc.
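To spell out the Dutch book point with a minimal made-up example: an agent whose credences in an event and its complement sum to more than 1, and who is willing to buy bets at prices set by those credences, can be sold a pair of tickets that lose money in every possible outcome:

```python
# Hypothetical agent whose credences violate the probability axioms:
# P(rain) + P(no rain) should be 1, but here it is 1.2.
credence = {"rain": 0.6, "no rain": 0.6}

# The agent will pay their credence in E for a ticket worth $1 if E occurs.
# The bookie sells them both tickets at the agent's own prices.
cost = sum(credence.values())  # agent pays $1.20 up front

for outcome in credence:
    payout = 1.0  # exactly one of the two tickets pays out
    print(f"if {outcome}: agent nets {payout - cost:+.2f}")  # -0.20 either way
```

The guaranteed loss is the pragmatic cash value of the coherence norms, which is the sense in which pragmatism underwrites them.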
More generally, when deciding what to abstract as basically similar vs fundamentally different, it is usually critical to ask “for what purpose?”, since one can of course draw hopelessly blurry or annoyingly fine-grained distinctions if one doesn’t have a goal to constrain one’s categorization method.
But that was never fully resolved. And if you are going to adopt “epistemic rationality doesn’t matter” as a premise, you need to make it explicit.
I don’t think epistemic rationality doesn’t matter, but obviously, since human brains are much smaller than the universe, our minds cannot track every distinction that exists, so we need some abstraction. This abstraction is best done in the context of some purpose one cares about, as one can then backchain to which distinctions do and do not matter.